International Journal of Computer Integrated Manufacturing, Vol. 24, No. 3, 2011


An improved voting analytic hierarchy process–data envelopment analysis methodology for suppliers selection

A. Hadi-Vencheh* and M. Niazi-Motlagh

Department of Mathematics, Islamic Azad University, Mobarakeh Branch, Mobarakeh, Isfahan, Iran

(Received 6 March 2010; final version received 3 January 2011)

Selecting an appropriate supplier is now one of the most important decisions of the purchasing department. Liu and Hai (Liu, F.H.F. and Hai, H.L., 2005. The voting analytic hierarchy process method for selecting supplier. International Journal of Production Economics, 97, 308–317) proposed a voting analytic hierarchy process method for selecting suppliers. Despite its many advantages, Liu and Hai's model (LH-model) has some shortcomings. In this article, the authors present an extended version of the LH-model for the multi-criteria supplier selection problem. An illustrative example is presented to compare the authors' model and the LH-model.

Keywords: data envelopment analysis (DEA); analytic hierarchy process (AHP); voting analytic hierarchy process (VAHP); multi-criteria decision making (MCDM)

1 Introduction

Supplier selection and evaluation is increasingly seen as a strategic issue for companies (Ceyhun and Irem 2007). Companies need to work with different suppliers to continue their activities. In manufacturing industries, raw materials and component parts can account for up to 70% of product cost. In such circumstances the purchasing department can play a key role in cost reduction, and supplier selection is one of the most important functions of purchasing management. Better performing suppliers enhance customer satisfaction throughout the value chain. Hence, strategic partnerships with such suppliers should be integrated within the supply chain to improve performance in many directions, including reducing costs by eliminating waste, continuously improving quality to achieve zero defects, improving flexibility to meet the needs of end-customers, and reducing lead time at different stages of the supply chain (Kumar et al. 2004, Amin and Razmi 2009). Selecting the right supplier is always a difficult task for the purchasing manager, as many researchers have confirmed (Kazerooni et al. 1997, Bevilacqua and Petroni 2002, Humphreys et al. 2003a, Kumar et al. 2004, 2006, Ding et al. 2005, Liu and Hai 2005, Guneri and Kuzu 2009, Hadi-Vencheh 2011). Suppliers have varied strengths and weaknesses, which require careful assessment by purchasers before the suppliers can be ranked. So every decision needs to be integrated by trading off the performances of different suppliers at each supply chain stage (Liu and Hai 2005).

The analytic hierarchy process (AHP) has found widespread application in decision-making problems involving multiple criteria in systems of many levels. The strongest feature of the AHP is that it generates numerical priorities from the subjective knowledge expressed in the estimates of paired-comparison matrices. The method is certainly useful in evaluating suppliers' weights in marketing, or in rank ordering, for instance; it is, however, difficult to determine a suitable weight and order for each alternative (Lee 2009). Supplier selection is essentially a multiple criteria decision making (MCDM) problem, which involves multiple assessment criteria such as cost, quality, quantity, delivery and so on. Therefore, MCDM approaches can be used for supplier assessment. Of the MCDM approaches, the AHP method is particularly suitable for modelling qualitative criteria and has found extensive applications in a wide variety of areas such as selection, evaluation, planning and development, decision making, forecasting, and so on. However, there are cases in which a large number of suppliers have to be evaluated and prioritised, while the AHP method can only compare a very limited number of decision alternatives; the pair-wise comparison manner is obviously infeasible in this situation.
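For readers unfamiliar with the mechanics, the sketch below shows the standard principal-eigenvector way of turning a pairwise-comparison matrix into priority weights. The 3×3 matrix is hypothetical and this is generic AHP practice, not the authors' method:

```python
import numpy as np

# Hypothetical 3x3 pairwise-comparison matrix for three criteria:
# entry A[i, j] estimates how much more important criterion i is than j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Principal-eigenvector method: the priority vector is the eigenvector
# belonging to the largest eigenvalue, normalised to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = eigvecs[:, k].real
w = w / w.sum()
print(np.round(w, 3))
```

With n criteria this requires n(n-1)/2 pairwise judgements, which is why the approach becomes impractical for large numbers of alternatives.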

Another way of gathering decision makers' opinions and selecting a candidate from among a set of candidates is preference voting. In preferential voting systems, each voter selects m candidates from among

*Corresponding author. Email: ahadi@khuisf.ac.ir

International Journal of Computer Integrated Manufacturing
Vol. 24, No. 3, March 2011, 189–197
ISSN 0951-192X print/ISSN 1362-3052 online
© 2011 Taylor & Francis
DOI: 10.1080/0951192X.2011.552528

n candidates (m ≤ n) and ranks them from the most to the least preferred. Each candidate may receive some votes in different ranking places. The total score of each candidate is the weighted sum of the votes he/she receives in the different places, and the winner is the one with the biggest total score. So the key issue of preference aggregation in a preferential voting system is how to determine the weights associated with the different ranking places (Wang et al. 2007).
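A minimal sketch of this aggregation rule; the candidate names, vote counts and place weights below are all hypothetical:

```python
# Preferential-voting aggregation: each candidate's total score is the
# weighted sum of the votes it received in each ranking place.

votes = {                       # votes[candidate][s] = votes in place s+1
    "A": [4, 2, 1],
    "B": [3, 4, 2],
    "C": [2, 3, 5],
}
place_weights = [0.5, 0.3, 0.2]   # assumed weights for 1st, 2nd, 3rd place

scores = {
    cand: sum(v * w for v, w in zip(vs, place_weights))
    for cand, vs in votes.items()
}
winner = max(scores, key=scores.get)
print(scores, winner)
```

The open question the paragraph raises is exactly where `place_weights` comes from; the DEA-based models discussed next derive them from the vote data itself.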

Liu and Hai (2005) presented a voting AHP method, henceforth the LH-model, for supplier selection. The voting AHP determines the weights of criteria not by pair-wise comparisons but by voting. The data envelopment analysis (DEA) method was used to aggregate the votes each criterion received in different ranking places into an overall score for each criterion. The overall scores were then normalised as the relative weights of the criteria. They used Noguchi's model (Noguchi et al. 2002) to determine the weights of the criteria.

Despite its many advantages, the LH-model has some shortcomings. For instance, if we do not know the number of voters, we cannot determine the lower bound of the weights and so cannot solve Noguchi's model (Noguchi et al. 2002). And to find the weights of R criteria we have to solve Noguchi's model R times (one linear programming (LP) problem for each criterion weight). Besides, steps 5 and 6 of the LH-model have some obscurities, and step 6 requires a very large number of questionnaires and score sheets to measure supplier performance and identify supplier priority. Of course, inspecting questionnaires and score sheets to determine scores is time consuming. In this article the authors present a new voting AHP–DEA (voting analytic hierarchy process (VAHP)–DEA) methodology to overcome the shortcomings mentioned above.

The remainder of this article is organised as follows. In Section 2, the authors give a brief description of the LH-model to provide the ground for the later development of the methodology. Shortcomings of the LH-model are presented in Section 3. The authors present their method in Section 4 and illustrate it using a real example. In Section 5, the authors make a comparison between their method, the LH-model and the AHP methodology proposed by Yahya and Kingsman (1999) for supplier selection. Section 6 concludes.

2 The LH-model (Liu and Hai 2005)

In this section, the authors give a brief description of the LH-model for selecting suppliers.

2.1 Step 1: Select suppliers’ criteria

All managers and supervisors of a company take part in this step. They were first briefed about the overall objective of the study, then specifically on supplier rating against Dickson's 23 criteria. The criteria obtained from the group decision fall into two categories, objective and subjective. The objective criteria are those that can be evaluated using factual data, and include quality, delivery, responsiveness, technical capability, facility, financial, etc. Subjective criteria are those that are difficult to quantify and thus have to be evaluated qualitatively, and include discipline, management, etc. Liu and Hai use the chosen criteria that must be satisfied in order to fulfil the goals of selecting suppliers.

2.2 Step 2: Structure the hierarchy of the criteria

The AHP was developed to provide a simple but theoretically sound multiple-criteria methodology for evaluating alternatives. Liu and Hai use the AHP to identify subcriteria under each criterion, and to investigate each level of the hierarchy separately. They structure the problem into a hierarchy. On the top level is the overall goal of selecting suppliers. On the second level are the criteria that contribute to the goal. On the third level the criteria are decomposed into subcriteria, and on the bottom (or fourth) level are the candidate suppliers that are to be evaluated in terms of the subcriteria of the third level.

2.3 Step 3: Prioritise the order of criteria or subcriteria

2.3.1 The first stage

In this step, Liu and Hai (2005) suppose that there are n managers (or voters) in the study, who will select different orders of the criteria or subcriteria for the candidates. Every manager votes for ranking places 1 to S (S ≤ R), where R is the number of criteria. For this purpose, assume there are R criteria. The criteria will be regarded as candidates. Hence, they get R orders from 1 to R and sum every vote in a table. It commonly happens that, when one has to select among many objects, a particular object is rated as the best in one evaluation, while others are selected by other evaluation methods. The managers get the order of the criteria but not the weights. The weight of each ranking place is determined automatically by the total votes each candidate obtains.

2.3.2 The second stage

Liu and Hai use the same methodology to find the orders of the subcriteria.


2.4 Step 4: Calculate the weights of criteria or subcriteria

2.4.1 The first stage

At the first stage of this step, Liu and Hai (2005) use Noguchi's voting and ranking model (model 1) to determine the weights of the criteria at each level of the hierarchy. This model is as follows:
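The model itself is missing from this copy. Based on the description of Noguchi's strong-ordering model given here and in Section 3 (in particular the lower bound 2/(nS(S+1))), it plausibly takes the following form; this is a hedged reconstruction from Noguchi et al. (2002), not a verbatim quotation:

```latex
\max \; z_o = \sum_{s=1}^{S} x_{os} w_s
\quad \text{subject to} \quad
\sum_{s=1}^{S} x_{rs} w_s \le 1, \; r = 1, \dots, R, \qquad
w_1 \ge 2 w_2 \ge \cdots \ge S\, w_S, \qquad
w_S \ge \frac{2}{n S (S+1)},
```

where x_rs is the number of votes criterion r received in ranking place s and n is the number of voters; the programme is solved once for each criterion o, which is why R linear programmes are needed in total.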

2.4.2 The second stage

In this stage, Liu and Hai (2005) use the voting data of the subcriteria and the same method to determine the weights of the second-level criteria. The second level gives the normalised values for all factors; the sum of the weights for the factors of each criterion must add up to 1. So a criterion's performance is made up from the weights of its subcriteria.

2.4.3 The third stage

The values in the bottom level are the global weights of the factors; each is the factor weight multiplied by the parent criterion weight. As the actual performance data are collected for the factor values, the weights in the bottom level can be used directly to calculate the overall rating of the suppliers and to provide a performance score for each factor.

2.5 Step 5: Measure supplier performance

This step requires the managers to assess the performance of all suppliers on the factors identified as important for supplier scores. A major problem is thus to ensure consistency between the managers and avoid any bias creeping in. A set of standard guidelines was set up after discussions with the managers (or voters) of the company. It was agreed that all performance scores would be based on an 11-point grade scale. Each grade has an adjective descriptor and an associated point score or range of point scores. The managers preferred, in the first instance, to make their judgement on the qualitative scale of adjectival descriptors. The general performance score guidelines are given in Table 1. Therefore each supplier can be awarded a score from 0 to 10 on each subcriterion.

2.6 Step 6: Identify supplier priority

Simple score sheets were provided to assist the managers in recording the scores for each supplier on each of the factors. Once the scores for each factor have been determined, it is relatively easy to calculate the resulting supplier rating scores. Mathematically, the supplier rating is the sum of the products of each factor weight and the supplier's performance score on that factor.
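The step-6 arithmetic can be sketched as follows; the weights and scores are made-up illustrations, not values from the article:

```python
# Supplier rating = sum over factors of (global factor weight x score).

factor_weights = [0.30, 0.25, 0.25, 0.20]   # global weights, summing to 1
scores = [8, 6, 9, 7]                       # 0-10 performance scores

rating = sum(w * s for w, s in zip(factor_weights, scores))
print(round(rating, 2))   # 0.30*8 + 0.25*6 + 0.25*9 + 0.20*7 = 7.55
```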

3 Issues with the LH-model

In what follows the authors set out ambiguities and shortcomings of the VAHP methodology presented by Liu and Hai (2005). Firstly, it uses Noguchi's strong ordering; despite its useful properties, this model has a main deficiency: it uses the term 2/(nS(S+1)) to bound the weights from below and keep them greater than zero. This raises a question: if we do not know the number of voters (n), what should we do?

Secondly, in step 4, to obtain the weights of each criterion and subcriterion, we have to solve the model R + P times, where R is the number of criteria and P is the number of subcriteria. Clearly this is time consuming.

Thirdly, in step 5, the managers have to compare each supplier with respect to each factor and award a score from 0 to 10 to each supplier on each factor. This one-by-one assessment is time consuming, too. Fourthly, in step 6, it is not specified whether the scores used to calculate the resulting supplier rating scores are the averages of the managers' scores, or whether a total score is calculated for each manager and all managers' total scores are then averaged to obtain the supplier rating scores.

4 Proposed six-step procedure

In this section, using a real example, the authors propose a new six-step procedure for supplier selection. The authors illustrate their method with a real case

Table 1. Supplier criteria score guideline (11-point scale; grades: very dissatisfied, poor, acceptable, good, very satisfied).

study to better describe the model. The case study relates to supplier selection at the Tiam Win Company, which concentrates on producing doors and windows in Shahr-e-kord, Iran. To produce its products, this company is required to purchase several kinds of profiles, such as aluminium, PVC and UPVC, in different sizes. Hence, Tiam Win Company buys its profiles from different suppliers according to the demands of its customers, both home and industrial. Overall, Tiam Win Company has several suppliers from different countries, namely Germany, Italy, Turkey and Iran. To evaluate five suppliers, the company applied the procedure as follows.

The problem is to select one of five candidate suppliers. The first step is structuring the problem into a hierarchy (see Figure 1). On the top level is the overall goal of selecting suppliers. On the second level are seven criteria that contribute to the goal. On the third level the seven criteria are decomposed into 13 subcriteria, and on the bottom (or fourth) level are the five candidate suppliers that are to be evaluated in terms of the subcriteria of the third level.

4.1 Step 1: Select suppliers’ criteria

The authors suppose the number of managers or voters is unknown. The managers were first briefed about the overall objective of the study, then specifically on supplier rating against Dickson's 23 criteria (Dickson 1966) and other supplier selection criteria research (Lehmann and O'Shaughnessy 1974, Abratt 1986, Weber et al. 1991, Min and Galle 1999, Stavropolous 2000, Ghodsypour and O'Brien 2001, Humphreys et al. 2003b, Chen et al. 2006, Lin and Chang 2008). The criteria obtained from the group decision fall into two categories, objective and subjective. The objective criteria are those that can be evaluated using factual data, and include quality, financial, responsiveness, accessibility and technical capability. The authors use the above seven criteria, which must be satisfied in order to fulfil the goals of selecting suppliers.

4.2 Step 2: Structure the hierarchy of the criteria

The AHP was developed to provide a simple but theoretically sound multiple-criteria methodology for evaluating alternatives. The authors use the AHP to identify subcriteria under each criterion, and to investigate each level of the hierarchy separately. The 13 subcriteria are quality-related certificates, factory audit, performance history, reputation, after-sale service, on-time delivery, conveyance way, distance, product range, design capability, attitude, communication system and e-commerce.

4.3 Step 3: Prioritise the order of criteria or subcriteria

4.3.1 The first stage

Let us suppose that the managers (or voters) in the study will select different orders of the criteria or subcriteria for the candidates. Every manager votes for ranking places 1 to S (S ≤ R), where R is the number of criteria. For this purpose, let us assume seven criteria: (1) quality, (2) background, (3) financial, (4) responsiveness, (5) accessibility, (6) technical capability and (7) management. These criteria will be regarded as candidates. We get seven orders from 1 to 7, and the sum of every vote is shown in Table 2.

It commonly happens that, when one has to select among many objects, a particular object is rated as the best in one evaluation, while others are selected by other evaluation methods. The managers get the order of the criteria but not the weights. The weight of each ranking place is determined automatically by the total votes each candidate obtains.

Figure 1. Hierarchy of suppliers' selection.

4.3.2 The second stage

The authors use the same methodology to find the orders of the subcriteria, presented in Table 3.

4.4 Step 4: Calculate the weights of criteria or subcriteria

In this article, instead of Noguchi's model, the authors propose the following model, which is used to determine the weights of the criteria at each level of the hierarchy:

max  z = α + w_S

subject to

Σ_{s=1}^{S} x_{rs} w_s ≥ α,  r = 1, …, R,
w_1 ≥ w_2 ≥ … ≥ w_S ≥ 0,
Σ_{s=1}^{S} w_s = 1,   (2)

where x_rs is the total votes of the rth criterion for the sth place. The above model maximises the minimum of the total scores of the R criteria and determines a common set of weights for all the criteria. In fact, this model maximises α (the minimum of the total scores) and the minimum weight w_S at the same time, and determines the most favourable weights for all the criteria. Indeed, w_S is added as a component of the objective function to force w_S not to equal 0.
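A max–min model of this kind can be solved as a single linear program. The sketch below (hypothetical vote counts; SciPy assumed available) maximises α + w_S subject to each criterion's total score being at least α, non-increasing place weights, and weights summing to 1:

```python
import numpy as np
from scipy.optimize import linprog

# votes[r][s] = total votes criterion r received for ranking place s + 1
# (hypothetical data: 3 criteria, 3 ranking places).
votes = np.array([
    [5.0, 3.0, 1.0],
    [3.0, 4.0, 2.0],
    [1.0, 2.0, 6.0],
])
R, S = votes.shape

# Decision variables: [alpha, w_1, ..., w_S]. linprog minimises,
# so we minimise -(alpha + w_S) to maximise alpha + w_S.
c = np.zeros(S + 1)
c[0] = -1.0
c[-1] = -1.0

A_ub, b_ub = [], []
# alpha - sum_s votes[r, s] * w_s <= 0 for every criterion r
for r in range(R):
    A_ub.append(np.concatenate(([1.0], -votes[r])))
    b_ub.append(0.0)
# w_{s+1} - w_s <= 0: place weights are non-increasing
for s in range(S - 1):
    row = np.zeros(S + 1)
    row[1 + s], row[2 + s] = -1.0, 1.0
    A_ub.append(row)
    b_ub.append(0.0)

# sum_s w_s = 1
A_eq = [np.concatenate(([0.0], np.ones(S)))]
b_eq = [1.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (S + 1))
alpha, w = res.x[0], res.x[1:]
print(res.success, np.round(w, 4))
```

One solve yields a common set of place weights for all criteria at once, in contrast to the R separate programmes Noguchi's model requires.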

4.4.1 The first stage

The authors use the data of Table 2 and find the weights of the seven criteria by model (2). Table 4 shows that the weights for quality, background, financial, responsiveness, accessibility, technical capability and management are 10.4573, 6.5271, 8.0285, 5.5767, 3.9734, 6.2837 and 6.1534, respectively. After normalising, the results are 0.2225, 0.1389, 0.1708, 0.1187, 0.0845, 0.1337 and 0.1309.
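The normalisation above can be reproduced directly: each raw score quoted in the text is divided by the total of all seven scores.

```python
# Raw criterion scores as quoted in Section 4.4.1.
raw = {
    "quality": 10.4573, "background": 6.5271, "financial": 8.0285,
    "responsiveness": 5.5767, "accessibility": 3.9734,
    "technical capability": 6.2837, "management": 6.1534,
}
total = sum(raw.values())
normalised = {k: round(v / total, 4) for k, v in raw.items()}
print(normalised)   # matches the normalised weights in the text
```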

4.4.2 The second stage

The authors use the data of Table 3 and the same method. Table 5 shows the weights of the second-level criteria. The sum of the weights for the factors of each criterion must add up to 1.

4.4.3 The third stage

The authors obtain the global weight for each of the factors by multiplying the factor weight by the criterion

Table 2. Priority votes of the seven criteria from respondents in the first stage.

Table 3. Priority votes of the subcriteria from respondents in the second stage.

weight; so for the factory audit factor the value is 0.5745 times 0.2225. As the actual performance data are collected for the factor values, the weights in Table 6 can be used directly to calculate the overall rating of the suppliers and to provide a performance score for each factor.

4.5 Step 5: Calculate supplier performance with respect to factors

4.5.1 The first stage

This step again requires the managers to assess the performance of all suppliers on the 14 factors identified as important for supplier scores. A major problem is thus to ensure consistency between the managers and avoid any bias creeping in. For this purpose, the authors apply a voting method like the one used in step 3; that is, for each factor, every manager orders the suppliers and votes for places 1 to T (T ≤ P, where P is the number of suppliers) with respect to that factor. Therefore, to assist the managers in recording the votes, the authors provide a questionnaire with 14 columns, each column having P rows and the title of a factor at its top. Once the managers or experts have voted and recorded their opinions, the sheets are gathered. For each factor the authors prepare a table and enter the votes associated with that factor in it. For instance, Table 7 shows the priority votes of the five suppliers with respect to performance history.

4.5.2 The second stage

Using the data of the last stage and model (2), the authors obtain the score of each supplier with respect to each factor. The method is shown for Table 7, and the resulting weights of the suppliers with respect to performance history are shown in Table 8. Table 9 shows the scores of each supplier with respect to each factor, obtained by the same methodology.

4.6 Step 6: Identify supplier priority

Whenever the scores for each factor are determined, it is relatively easy to calculate the resulting supplier rating scores. An example of this is shown in Table 10. Mathematically, the supplier rating is the sum of the products of each factor's global weight and the supplier's performance score on

Table 5. Weights of the 13 subcriteria in the second stage.

Table 6. Global weights of the 14 factors.

that factor. The supplier rating value for supplier-1 is obtained by summing the products of the respective elements in columns 3 and 4 for each row, and is given in the final column. The rating method used for supplier-1 can also be used to find the total scores of the other suppliers. The supplier with the highest rating value should be regarded as the best performing supplier, and the rest can be ranked accordingly. Table 11 shows the rating value of each supplier and its rank under the proposed method.

4.7 Comparison

In this section the authors compare their method, the LH-model and the AHP method proposed by Yahya and Kingsman (1999), step by step. As Table 12 shows, steps 1 and 2 are the same in the three methods. Step 3 is the same in the proposed method and the LH-model but differs from the AHP; in fact this is the main difference between the voting approaches and the AHP, and it is why the authors call these approaches VAHP. In this step, the voting methods use voting to prioritise the order of alternatives, whereas the AHP method uses comparison matrices that take time, so that as the number of criteria increases, pairwise comparisons become impossible to make. The traditional AHP method can only compare a very limited number of decision alternatives, usually not more than 15. When there are hundreds

Table 9. The scores of the suppliers with respect to the 14 factors.

Table 10. Rating of supplier-1.

or thousands of alternatives to be compared, the pair-wise comparison manner provided by the traditional AHP is obviously infeasible. To overcome this difficulty, the authors combine the AHP with a new voting DEA model and propose an integrated VAHP–DEA methodology in this article. The purpose of step 4 is the same in all the methods, but the approach is different: the LH-model uses Noguchi's model, which has the shortcomings mentioned before, while the authors propose a new DEA model which overcomes those shortcomings. In step 5, the AHP and the LH-model use comparative scoring, but the authors again use voting, with model (2), to measure supplier performance; this helps avoid any bias creeping in and is easy. Finally, step 6 is the same in the three methods. So the proposed VAHP differs from the LH-model in steps 3, 4 and 5.

5 Conclusion

Outsourcing decisions are an integral aspect of the logistics function. Traditionally, they have dealt primarily with the supply of raw materials and component parts, and some services such as transportation. In recent years, with the increase in contract logistics, many firms are outsourcing activities that were once performed in-house. To remain competitive with these third-party providers, logistics managers must use more sophisticated techniques when performing their duties. In this article, the authors proposed a new weighting procedure for selecting suppliers in place of the AHP's paired comparisons. The proposed model uses an integrated VAHP–DEA methodology to evaluate alternatives. It provides a simpler calculation both of the weights to be used and of the scoring of supplier performance. It is shown that the new integrated VAHP–DEA methodology is simple, easy to use, applicable to any number of decision alternatives, and particularly useful and effective for complex MCDM problems with a large number of decision alternatives, where pair-wise comparisons are certainly impossible to make. It is expected that in the near future this method will be applied effectively to various issues such as policy making, business strategies and performance assessment.

References

Abratt, R., 1986. Industrial buying in high-tech markets. Industrial Marketing Management, 15 (4), 293–298.

Amin, S.H. and Razmi, J., 2009. An integrated fuzzy model for supplier management: A case study of ISP selection and evaluation. Expert Systems with Applications, 36, 8639–8648.

Bevilacqua, M. and Petroni, A., 2002. From traditional purchasing to supplier management: A fuzzy logic-based approach to supplier selection. International Journal of Logistics: Research and Applications, 5 (3), 235–255.

Ceyhun, A. and Irem, O., 2007. Supplier evaluation and management system for strategic sourcing based on a new multicriteria sorting procedure. International Journal of Production Economics, 106, 585–606.

Chen, T.C., Lin, C.T., and Huang, S.F., 2006. A fuzzy approach for supplier evaluation and selection in supply chain management. International Journal of Production Economics, 102, 289–301.

Dickson, G.W., 1966. An analysis of vendor selection systems and decisions. Journal of Purchasing, 2 (1), 5–17.

Ding, H., Lyès, B., and Xiaolan, X., 2005. A simulation optimization methodology for supplier selection problem. International Journal of Computer Integrated Manufacturing, 18 (2–3), 210–224.

Ghodsypour, S.H. and O'Brien, C., 2001. The total cost of logistics in supplier selection, under conditions of multiple sourcing, multiple criteria and capacity constraint. International Journal of Production Economics, 73, 15–27.

Guneri, A.F. and Kuzu, A., 2009. Supplier selection by using a fuzzy approach in just-in-time: A case study. International Journal of Computer Integrated Manufacturing, 22 (8), 774–783.

Hadi-Vencheh, A., 2011. A new nonlinear model for multiple criteria supplier-selection problem. International Journal of Computer Integrated Manufacturing, 24 (1), 32–39.

Humphreys, P.K., McIvor, R., and Chan, F.T.S., 2003a. Using case-based reasoning to evaluate supplier environmental management performance. Expert Systems with Applications, 25, 141–153.

Humphreys, P.K., Wong, Y.K., and Chan, F.T.S., 2003b. Integrating environmental criteria into the supplier selection process. Journal of Materials Processing Technology, 138, 349–356.

Kazerooni, A., Chan, F.T.S., and Abhary, K., 1997. A fuzzy integrated decision-making support system for scheduling of FMS using simulation. International Journal of Computer Integrated Manufacturing Systems, 10, 27–34.

Kumar, M., Vrat, P., and Shankar, R., 2004. A fuzzy goal programming approach for vendor selection problem in a supply chain. Computers and Industrial Engineering, 46 (1), 69–85.

Kumar, M., Vrat, P., and Shankar, R., 2006. A fuzzy programming approach for vendor selection problem in a supply chain. International Journal of Production Economics, 101 (2), 273–285.

Table 12. Differences between the LH-model and the proposed model: step 1, select suppliers' criteria (both); step 2, structure the hierarchy (both); step 5, measure supplier performance (LH-model) versus calculate suppliers' weights with respect to factors (proposed model); step 6, identify supplier priority (both).


Lee, A.H.I., 2009. A fuzzy AHP evaluation model for buyer–supplier relationships with the consideration of benefits, opportunities, costs and risks. International Journal of Production Research, 47 (5), 4255–4280.

Lehmann, D.R. and O'Shaughnessy, J., 1974. Difference in attribute importance for different industrial products. Journal of Marketing Research, 38 (1), 36–42.

Lin, H.T. and Chang, W.L., 2008. Order selection and pricing methods using flexible quantity and fuzzy approach for buyer evaluation. European Journal of Operational Research, 187 (2), 415–428.

Liu, F.H.F. and Hai, H.L., 2005. The voting analytic hierarchy process method for selecting supplier. International Journal of Production Economics, 97, 308–317.

Min, H. and Galle, W.P., 1999. Electronic commerce usage in business to business purchasing. International Journal of Operations and Production Management, 19 (9), 909–921.

Noguchi, H., Ogawa, M., and Ishii, H., 2002. The appropriate total ranking method using DEA for multiple categorized purposes. Journal of Computational and Applied Mathematics, 146, 155–166.

Stavropolous, N., 2000. Suppliers in the new economy. Telecommunications Journal of Australia, 50 (4), 27–29.

Wang, Y.M., Chin, K.S., and Yang, J.B., 2007. Three new models for preference voting and aggregation. Journal of the Operational Research Society, 58, 1389–1393.

Weber, C.A., Current, J.R., and Benton, W.C., 1991. Vendor selection criteria and methods. European Journal of Operational Research, 50 (1), 2–18.

Yahya, S. and Kingsman, B., 1999. Vendor rating for an entrepreneur development programme: A case study using the analytic hierarchy process method. Journal of the Operational Research Society, 50, 916–930.


Optimisation of weld deposition efficiency in pulsed MIG welding using hybrid neuro-based techniques

Kamal Pal, Sandip Bhattacharya and Surjya K. Pal*

Department of Mechanical Engineering, Indian Institute of Technology, Kharagpur 721 302, West Bengal, India

(Received 12 January 2010; final version received 6 November 2010)

The weld quality depends primarily on the degree of arc stability and the bead characteristics in gas metal arc welding. The weld deposition has to be enhanced to make the process economically feasible. This article addresses modelling and optimisation of deposition efficiency in highly non-linear pulsed metal inert gas welding. The design of experiments was performed using central composite response surface methodology for the model development. The back-propagation neural network technique was found to be better than the response surface regression model. Two global optimisation techniques, namely genetic algorithm and differential evolution, were then applied to maximise the deposition efficiency. The capability of the differential evolution technique to identify hidden optimum solutions was found to be better than that of the genetic algorithm.

Keywords: peak voltage; pulse frequency; pulse on-time; torch angle; arc stability; neuro-optimisation; GA; DE

1 Introduction

The arc stability in gas metal arc welding (GMAW) depends on material transfer behaviour and arc shape variation. The deposition efficiency is an economic factor, like weld productivity. It increases with the reduction of spatter, caused by higher arc stability, in pulsed gas metal arc welding (P-GMAW). P-GMAW is widely used, especially in thin sheet metal joining, as it provides a stable spray transfer with reduced heat input (Smati 1986, Thamodharan et al. 1999). There are various pulse parameters in addition to the normal arc welding parameters in P-GMAW. The arc stability, as well as weld quality, can be significantly improved by controlling the pulse parameters (Tong et al. 2001, Ghosh et al. 2007). The arc stability is best in the one droplet per pulse (ODPP) condition, with the droplet diameter close to the electrode wire diameter (Amin 1983, Allum 1985, Kim 1989). This can be achieved by selecting an appropriate amplitude and duration of the peak current, which must be higher than the transition current to ensure detachment (Mike and Kemppi 1989). Significant efforts have been made to achieve ODPP conditions in P-GMAW (Zhang et al. 2000, De Miranda et al. 2007).

Various mathematical models have been developed to monitor the arc stability (Benyounis and Olabi 2008). The conventional techniques focus mainly on the mean or the variance of the performance characteristics. The dual response approach considers both mean and variance to develop the model (Kim and Rhee 2004). The model has been further used for optimisation. However, GMAW processes are highly dynamic and non-linear with a multitude of uncontrollable factors, which suggests the need for an adaptive intelligent system to characterise and then further monitor the process. Thus, various evolutionary algorithms and computational networks have also been developed, which consider the uncertainty features of the process. These tools may improve the model, with the occurrence of incremental learning as new data become available. Thus, these techniques are used in a wide variety of applications, from classification and pattern recognition to optimisation and control. In recent years, soft computing tools have also been used with numerical techniques to predict and optimise GMAW parameters more accurately (Moon and Na 1997, Kim and Rhee 2002, Olabi et al. 2006).

Most of the conventional robust process design techniques have been used to maximise the process performance while minimising the expected loss (Allen et al. 2001). The response surface methodology (RSM) and the Taguchi method have been applied widely in GMAW optimisation (Song et al. 2005, Hsiao et al. 2008, Balasubramanian et al. 2009, Giridharan and Murugan 2009, Kumar and Sundarrajan 2009). However, these techniques are limited to regular experimental regions. This limitation can be overcome with the introduction of the genetic algorithm (GA) (Correia et al. 2005). It can generate a global optimum point rather than local optimum solutions (Tarng et al. 1999, Huang et al. 2007). However, there is a risk of insufficient sweeping of the search space with improper parameter settings in GA (Correia et al. 2004). The controlled random search algorithm, similar to GA, has been used to overcome these difficulties (Kim et al. 2005). The adaptive gradient descent neural network has also been found to be useful in GMAW optimisation (Meng and Buffer 1997). The GA technique has been applied on the trained neural network model, called neuro-GA, to improve the optimisation capability (Tseng 2006, Park and Rhee 2008).

*Corresponding author. Email: skpal@mech.iitkgp.ernet.in

Vol. 24, No. 3, March 2011, 198–210
ISSN 0951-192X print/ISSN 1362-3052 online
© 2011 Taylor & Francis
DOI: 10.1080/0951192X.2010.542181

In recent years, the differential evolution (DE) technique has been applied to improve the training of gradient descent artificial neural networks (ANNs) (Du et al. 2007, Slowik and Bialko 2008). The neuro-DE algorithm has been applied to various areas such as weather forecasting (Abdul-Kader 2009) and bankruptcy prediction in banks (Chauhan et al. 2009). The DE approach has also been used for tuning the PID controller of MIMO systems (Iruthayarajan and Baskar 2009), highway network capacity optimisation (Koh 2009) and reliability-redundancy optimisation (Coelho 2009). This approach has been found to be more useful than GA for better convergence in the case of non-linear systems (Subudhi et al. 2008).

In the present work, the design of experiments was performed using half fractional central composite RSM in pulsed metal inert gas welding (P-MIGW). The welding torch angle (θt), welding speed (S) and wire feed rate (F), along with three pulse parameters, namely, peak voltage (Vp), pulse frequency (fp) and pulse on-time (tp), were considered for development of the deposition efficiency model using RSM as well as a back propagation neural network (BPNN). The response surface method was found to be inadequate. Therefore, the optimisation of process parameters for maximum deposition efficiency was processed with two different implementations, GA and DE, on the developed BPNN model.

2 Experimental procedure

In this work, a voltage-controlled P-MIGW machine (FRONIUS make with TRANSARC 500 power source and FRONIUS VR131 control unit) was used. The experiments were carried out on 6-mm mild steel plates using copper-coated mild steel filler wire (ESAB, S-6 wire, 1.2-mm diameter), using the butt welding method. The schematic diagram of the experimental set-up at 0° torch angle (perpendicular welding) is shown in Figure 1. A four-roller drive system fed the electrode wire to the welding gun. The design of experiments was performed using central composite RSM.

The chemical composition of the work material was obtained by optical emission spectroscopy analysis, as shown in Table 1. Pure argon (99.9%) was used as shielding gas at a pressure of 10 kgf/cm² with a flow rate of 15 L/min. The welding torch tip to base plate distance was maintained at 15 mm. Six process parameters: welding speed (S), wire feed rate (Fw), pulse frequency (fp), pulse on-time (tp), peak voltage (Vp) and torch angle (θt), were considered in this investigation.

The specimens were prepared with a V-shaped groove having the groove angle, the root face and the root gap of 60°, 1.5 mm and 0.5 mm, respectively. The faces

Figure 1 Schematic diagram of the experimental set-up in perpendicular welding.

International Journal of Computer Integrated Manufacturing 199


of each pair of specimens were cleaned by a surface grinder. Each pair of plates was tack welded at the two ends to make a butt weld joint. The weight of each pair of plates before (Wi) and after (Wf) welding was measured by an electronic balance (A and D Company Limited, GF-3000) weighing equipment. The deposition efficiency (ηd) can be expressed as the ratio of the actual enhancement of a pair of base plates' weight due to welding to its theoretical value (Wd) related to wire feed rate (F) as per Equation (1), where 'l' and 'tw' indicate the mass per unit length of the electrode wire and the welding time duration, respectively.

ηd = (Wf − Wi)/Wd = (Wf − Wi)/(l F tw)  (1)
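As a concrete illustration, Equation (1) can be evaluated numerically. The sketch below uses entirely hypothetical plate weights and wire data (the function name and sample values are not from the article) and assumes consistent units: l in g/mm, F in mm/s and tw in s.

```python
def deposition_efficiency(w_f, w_i, l, f, t_w):
    """Equation (1): actual weight gain of the plate pair divided by the
    theoretical deposited weight W_d = l * F * t_w, in percent."""
    w_d = l * f * t_w                  # theoretical deposition, g
    return (w_f - w_i) / w_d * 100.0

# hypothetical example: 0.01 g/mm wire fed at 100 mm/s for 50 s,
# plate pair gains 45 g -> 90% deposition efficiency
eta_d = deposition_efficiency(w_f=1045.0, w_i=1000.0, l=0.01, f=100.0, t_w=50.0)
```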

3 Development of design of experiments

Various trial experiments were carried out to set the range of each process parameter for acceptable weld quality. In the present work, RSM was used as the design of experiment technique. A half fractional central composite RSM (α = 2.378) with nine centre point experiments was designed. The coded values of the upper and lower level for each process parameter were +2.378 and −2.378, respectively. The levels and their corresponding actual values are shown in Table 2. A negative value of torch angle indicated backhand welding, whereas a positive value indicated forehand welding, as shown in Figure 2. The torch perpendicular condition was represented by a 0° torch angle. The actual values of each parameter were adjusted as per the available settings in the welding machine and the motor attached to the welding table.

The coded design matrix containing a total of 53 experiments, developed using MINITAB software (release 13.31, Minitab Inc. 2002), is shown in Table 3. However, the experiments were performed randomly to avoid the possibility of systematic error in the process.
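The mapping between the coded levels (−2.378 to +2.378) and the actual machine settings is linear. A minimal sketch, assuming a hypothetical peak-voltage range of 30–44 V (the real parameter ranges are those of Table 2):

```python
def coded_to_actual(coded, low, high, alpha=2.378):
    """Map a coded CCD level in [-alpha, +alpha] to an actual setting,
    with -alpha -> low and +alpha -> high (linear interpolation)."""
    centre = (high + low) / 2.0
    half_range = (high - low) / 2.0
    return centre + coded * half_range / alpha

# centre point (coded 0) sits midway in the hypothetical 30-44 V range
v_centre = coded_to_actual(0.0, 30.0, 44.0)
```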

4 Modelling of deposition efficiency

The pulse parameters highly influence the arc stability in P-GMAW (Pal et al. 2009a). The torch position and its direction during welding also affect the weld quality and deposition efficiency (Nouri et al. 2007, Kannan and Yoganandh 2009, Pal et al. 2009a). The process inputs with the corresponding deposition efficiency of the 53 experiments are shown in Table 4. A total of nine centre point experiments (experiment nos. 35, 37, 39, 40, 41, 48, 51, 52 and 53) having the same process parameter values were used to check the repeatability of the deposition efficiency.

The ANN was also considered, along with mathematical models, due to the non-linear nature of the arc during welding. In this work, the RSM and BPNN techniques were used to develop the model of deposition efficiency.

4.1 Development of mathematical model

A response surface is a functional mapping of various process parameters to a single output feature. In the present research, a second-order polynomial response surface model was developed using 53 sets of data to correlate six input process parameters: S, Fw, θt, Vp, fp and tp, with the output variable, deposition efficiency. The commercially available software, MINITAB, was used for the model development and further statistical analysis to check the adequacy of the model.

The significant and insignificant coefficients were calculated using Student's t-test by comparing their values with standard tabulated data at their corresponding degree of freedom and 95% confidence level. When the calculated value of 't' corresponding to a coefficient exceeds the standard tabulated value, the coefficient may be considered significant. The significant regression coefficients were recalculated to develop the final model (Equation (2)). The adequacy of the model was tested at the 95% confidence level using the analysis of variance (ANOVA) technique.
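A second-order polynomial response surface combines an intercept, linear, squared and two-factor interaction terms. A generic evaluator is sketched below; the coefficient values in the toy example are made up (the fitted coefficients belong to Equation (2) in the article).

```python
def quadratic_response(x, b0, b_lin, b_quad, b_int):
    """Evaluate y = b0 + sum(b_i*x_i) + sum(b_ii*x_i^2) + sum(b_ij*x_i*x_j),
    the general second-order polynomial used in RSM."""
    y = b0
    y += sum(b * xi for b, xi in zip(b_lin, x))          # linear terms
    y += sum(b * xi * xi for b, xi in zip(b_quad, x))    # squared terms
    k = 0
    for i in range(len(x)):                              # interaction terms
        for j in range(i + 1, len(x)):
            y += b_int[k] * x[i] * x[j]
            k += 1
    return y

# two-variable toy check: 1 + (1 + 2) + (1 + 4) + (1*2) = 11
y = quadratic_response([1.0, 2.0], 1.0, [1.0, 1.0], [1.0, 1.0], [1.0])
```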

Table 1 Chemical composition (wt %) of the base plate: 0.208, 0.171, 0.489, 0.088, 0.047, 0.018, 0.008, 0.007.

Table 2 Process parameters and their levels.


The ANOVA result of the reduced model is shown in Table 5. The acceptance of these models mainly depends on the P, F and R² values. The P value indicates the probability of significance of the model, which should be less than 0.05 at the 95% confidence level. The P value of the reduced modified regression equation was found to be improved from 0.112 (initial full model) to 0.035 (less than 0.05). The F value of the model has to be higher than the tabulated F value at the 95% confidence level at the respective degrees of freedom of the regression model. The F-value criterion for the initial regression model was not satisfied. This criterion was fulfilled in the modified model, as the F value was 2.05, which is more than the tabulated F value. The lack of fit is another essential criterion for accepting the developed model. This source of variation should not be predominant. Hence, the F ratio should be less than the tabulated F ratio at a specified confidence level (95%) for the lack-of-fit consideration. It was also found to be satisfied, as the F value was 1.03 (F0.05,24,20 = 2.08). However, the R² value (63.8%) was found to be poor. Therefore, this response surface regression model was not highly adequate to represent the relationship between the deposition efficiency and the process parameters.

4.2 Development of ANN model

ANNs are computational models inspired by the central nervous system comprising neurons. The multi-layered perceptron, generally trained using the error back propagation algorithm, has been popularly used in weld modelling (Kim et al. 2001, 2002, Pal et al. 2008). The network is built up of numerous individual units called neurons. A typical feed forward network is arranged into an input layer, an output layer and any number of hidden layers. Each layer comprises a variable number of nodes as neurons. In this research, a code for a multi-neuron, multi-layered ANN model was developed in the C programming language, for mapping the P-MIGW process parameters to weld deposition efficiency. A schematic representation of the fully connected multi-neuron, single hidden layered ANN architecture, which was employed in this research, is shown in Figure 3. The input layer comprises six nodes corresponding to the six input parameters and the output layer has only one node corresponding to deposition efficiency. The number of nodes in the hidden layer was varied from 1 to 30 to obtain the optimal prediction accuracy.

The summation of the products of the weight of each node of the previous layer (wji) and the corresponding inputs (yi) gives the input of the jth neuron. Each neuron accepts the weighted sum of inputs (I) and outputs a single value (O) depending on its transfer function (f) (Equation (3)). The log-sigmoidal transfer function was used in this work as the activation function for the hidden layer to establish the non-linearity of the process.

O = f(I) = f(Σi wji yi)  (3)

The outputs of any layer, other than the output layer, were used as the inputs of the succeeding layer. Thus, the network provides a non-linear mapping between input parameters and output features. Each network was trained using a set of known input and output values. Training algorithms change the inter-neuron weights in such a way that the error function

Figure 2 Schematic representation of different torch angles in P-MIGW.



(E), which is related to the difference between the target values (Ti) and the actual output (Oi) values (Equation (4)), is reduced.

E = (1/N) Σ_{i=1..N} (Ti − Oi)²  (4)

The entire 'knowledge' of the network is stored as the inter-neuron weights. The present work uses the gradient descent error back-propagation algorithm (Werbos 1974) to train the network. The network is adjusted to reduce the overall mean square error

Table 3 Coded design matrix using RSM (columns: θt (deg), S (mm/s), F (m/min), Vp (V), fp (Hz), tp (ms), ηd (%)).


(MSE) in back-propagation training (Werbos 1990). The synaptic weights between nodes are modified from Wold to Wnew according to an error correction chain rule (Equation (5)) during the backward pass, based on the gradient descent technique, to minimise the MSE between the actual pth output (Op) and the desired pth output (Tp) for the total number of training patterns (N), as in Equation (6).

Wnew = Wold − η (∂E/∂W)  (5)

MSE = (1/N) Σ_{k=1..N} Σ_{p=1..P} (T_p^k − O_p^k)²  (6)
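Equations (3), (4) and (6) can be sketched together: a forward pass through a 6-8-1 log-sigmoid network and the mean square error it produces over a batch. The weights below are random placeholders rather than the trained values, and biases are omitted for brevity.

```python
import math
import random

def logsig(x):
    """Log-sigmoid transfer function used for the hidden layer."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, w_hidden, w_out):
    """Equation (3) applied layer by layer for a 6-8-1 network: each hidden
    neuron outputs f(sum_i w_ji * y_i); the single output node is linear here."""
    hidden = [logsig(sum(w * xi for w, xi in zip(ws, x))) for ws in w_hidden]
    return sum(w * h for w, h in zip(w_out, hidden))

def mse(targets, outputs):
    """Equation (6)-style mean square error over all training patterns."""
    return sum((t - o) ** 2 for t, o in zip(targets, outputs)) / len(targets)

random.seed(0)
w_hidden = [[random.uniform(-1, 1) for _ in range(6)] for _ in range(8)]
w_out = [random.uniform(-1, 1) for _ in range(8)]

patterns = [[0.1] * 6, [0.5] * 6, [0.9] * 6]   # normalised inputs
outputs = [forward(p, w_hidden, w_out) for p in patterns]
error = mse([0.5, 0.6, 0.7], outputs)          # hypothetical targets
```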

The learning rate (η) was adjusted to reduce the MSE. The momentum coefficient (α) was also used to maintain stability with adequate learning according to the delta rule.

In this work, the BPNN model was developed based on the same experimental dataset that was used to develop the response surface regression model. The whole dataset was normalised between 0.1 and 0.9. The performance of the BPNN model depends on the network parameters, such as the number of neurons in the hidden layer (h), the learning rate (η) and the momentum coefficient (α). Therefore, achieving the optimal architecture is quite a difficult task. Several trials were made to finally obtain the optimal architecture, which can provide the minimum MSE. The full experimental dataset was divided into a 'training dataset' and a 'testing dataset'. The testing patterns were randomly chosen from the total dataset. The overtraining problem was avoided by using cross-validation of 'training patterns' and 'testing patterns' from the complete experimental dataset during training. This process was repeated using random selection of different subsets of data to check the generalisation capability of the network. Finally, six randomly chosen experimental data (experiment nos. 5, 12, 20, 31, 39 and 49, highlighted in italics in Table 4) were used as the 'testing dataset'. The networks were compared on the basis of their prediction accuracy in testing by training up to a maximum of 100,000 iterations or until the MSE in testing reached 0.005. Once the models had been developed using 46 training patterns, they were validated by the testing dataset to test the prediction capability of the networks. The optimum architecture was found by varying the number of neurons in the hidden layer along with the variation of η and α. This evaluation was carried out by the determination of the MSE in testing based on the absolute value of the deposition efficiency. It was found that the 6-8-1 architecture provides the best data fitting capability with η and α being 0.8 and 0.5, respectively. This optimum architecture provided the minimum MSE in training and testing of 0.0136 and 0.0063, respectively.

Table 5 ANOVA table for deposition efficiency (ηd) model (Seq SS, Adj SS columns).

Figure 3 Schematic representation of ANN model.
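The 0.1–0.9 normalisation applied to the dataset is a simple linear rescaling; a minimal sketch (the raw values below are hypothetical):

```python
def normalise(values, lo=0.1, hi=0.9):
    """Linearly rescale a column of raw data into [lo, hi], as done for
    the BPNN training data (0.1 to 0.9)."""
    vmin, vmax = min(values), max(values)
    return [lo + (hi - lo) * (v - vmin) / (vmax - vmin) for v in values]

scaled = normalise([30.0, 37.0, 44.0])   # hypothetical raw voltages
```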

4.3 Comparison of the developed BPNN and RSM model

The prediction capability of the developed models was indicated by the percentage error in the prediction of deposition efficiency. The percentage of prediction error was calculated by Equation (7).

Prediction error (%) = ((experimental value − predicted value)/experimental value) × 100  (7)

The prediction error of each of the six testing patterns is shown in Figure 4 for the BPNN model as well as the RSM regression model. It indicated that the percentage error for all the testing patterns is within ±5% using the BPNN model, whereas it was more than 10% in the case of the RSM model.

The mean absolute prediction error was obtained by averaging the absolute prediction errors of all six testing patterns. It was calculated as per Equation (8).

Mean absolute prediction error (%) = (1/6) Σ_{i=1..6} |prediction error_i|  (8)

Thus, the mean absolute prediction error was 7.87% using the reduced response surface regression model, which was found to be improved to 1.67% using the BPNN model. Based on the detailed analysis, it may be concluded that the BPNN model is more accurate than the RSM model. Therefore, the BPNN model was used for the parametric study and further optimisation of deposition efficiency.

5 Parametric study on deposition efficiency

The effect of each process parameter on deposition efficiency was investigated keeping the other parameters constant at a specific level, coded from −2.378 to +2.378 as discussed in Section 3. The BPNN model was used to predict the deposition efficiency with the variation of each process parameter at these respective levels. The influence of torch angle, peak voltage and pulse frequency was found to be predominant at a particular parametric level. As the torch angle became positive (forehand welding), the deposition efficiency was found to be reduced due to the high amount of spatter caused by improper gas shielding (Figure 5(a)). The deposition efficiency increased with an enhancement of peak voltage as well as pulse frequency due to the better arc stability (Figure 5(d) and (e)). However, the deposition efficiency was not significantly influenced by pulse on-time, except at a high negative torch angle (backhand welding), as shown in Figure 5(f). It improved slightly at either low welding speed or low wire feed rate, except in backhand welding (Figure 5(b) and (c)).

This parametric investigation indicated that the deposition efficiency increased significantly with higher peak voltage, higher pulse frequency and higher pulse on-time, along with a negative torch angle. However, it may be improved with different parametric combinations using optimisation techniques.

Figure 4 Comparison of prediction error in ANN and RSM model.


Figure 5 Effect of process parameter on deposition efficiency for various parametric levels.



6 Optimisation of deposition efficiency

The optimisation of process parameters for maximum deposition efficiency was determined using two global evolutionary optimisation techniques, namely GA and DE, whose output was processed through the pre-trained BPNN model.

6.1 Neuro-GA optimisation

GA belongs to a type of optimisation algorithms known as evolutionary algorithms (Bäck and Schwefel 1993). These algorithms attempt to imitate the genetic evolution process to solve exploration and optimisation problems (Fonseca and Fleming 1995). In conventional GA, potential solutions are coded into binary strings. An initial population of potential solutions (chromosomes) is created and their fitness evaluated using a fitness function. In each generation, pairs of chromosomes are selected (based on their fitness). These chromosomes are recombined (cross-over) and arbitrarily mutated to produce offspring for the subsequent generation (Mitchell 1998). The neuro-GA optimisation technique is a combination of neural networks and GAs wherein a neural network is used to train the network before the optimisation process. The basic sequence of a neuro-GA based optimisation is shown in Figure 6. The major parameters which control the optimisation process are the cross-over probability (CR) and the mutation factor (FM). In this work, the GA based optimisation process was controlled by the population size (20–100), cross-over rate (0.1–0.9) and mutation rate (0.1–0.9). The maximum number of iterations considered in the optimisation process was 150.

The optimal parameter setting for maximum weld deposition efficiency was determined by GA-based optimisation, where the fitness values were calculated using the previously trained BPNN model. The initial population of potential solutions (chromosomes) was chosen randomly within the experimental parameter range and then the response feature of each solution was computed using the trained BPNN model to be fed into the GA. The GA performed various genetic operations (selection, crossover and mutation) to generate a new population and continued until optimality was achieved. In each generation, pairs of chromosomes were selected based on their fitness value.
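The select–recombine–mutate loop described above can be sketched as follows. A toy quadratic stands in for the trained BPNN fitness model, and the operator choices here (real-coded genes, binary tournament selection, arithmetic cross-over, uniform mutation) are illustrative assumptions, not the article's exact binary-coded implementation.

```python
import random

def ga_maximise(fitness, n_pop=20, n_gen=150, p_cross=0.9, p_mut=0.1,
                bounds=(0.0, 1.0)):
    """Minimal real-coded GA: tournament selection, arithmetic cross-over
    and uniform random mutation, repeated for n_gen generations."""
    lo, hi = bounds
    pop = [random.uniform(lo, hi) for _ in range(n_pop)]
    for _ in range(n_gen):
        new_pop = []
        for _ in range(n_pop):
            # select two parents by binary tournament (based on fitness)
            p1 = max(random.sample(pop, 2), key=fitness)
            p2 = max(random.sample(pop, 2), key=fitness)
            # recombine (cross-over) with probability p_cross
            child = 0.5 * (p1 + p2) if random.random() < p_cross else p1
            # arbitrary mutation with probability p_mut
            if random.random() < p_mut:
                child = random.uniform(lo, hi)
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

random.seed(1)
best = ga_maximise(lambda x: -(x - 0.7) ** 2)   # toy stand-in for the BPNN model
```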

6.2 Neuro-DE optimisation

DE is a stochastic evolutionary algorithm like GA. This approach employs differential mutation instead of the simple mutation used in GA. It is a recent population-based algorithm developed by Storn and Price (1995). Generally, the solutions to the problem to be optimised are real-encoded. DE starts with a randomly chosen population of potential solutions, as with GAs (Mayer et al. 2005). The reproduction method of DE is different from other evolutionary mechanisms. Each population element is addressed in turn and a challenger member is created for each member. The challenger member either copies the parent gene or a new gene is formed by the combination of the corresponding gene of three randomly chosen members, for each gene of the parent member. This selection is determined by the generated random number and the cross-over rate. If the random number is less than the cross-over rate, then the chosen gene is copied to the corresponding gene of the challenger member. Otherwise, the new gene is formed from the consequent genes of three randomly chosen population members according to Equation (9). The subscripts 1, 2 and 3 indicate the three randomly selected members of the initial population. This process is repeated for all genes of the selected population member as well as for all population members, creating as many challenger members as population members. Finally, the selection is made between the chosen member and the corresponding challenger member with respect to the objective function value.

x_{i,j} = x_{1,i,j} + FA (x_{2,i,j} − x_{3,i,j})  (9)

The cross-over rate (CR) and the differential variation amplification factor (FA) are the control parameters in DE. It is easier to execute relative to other evolutionary algorithms: it requires fewer lines of computer code while still providing the necessary reproduction mechanisms. The optimisation using DE is relatively simple and less memory intensive, providing better convergence than GA (Mayer et al. 2005). Neuro-DE is a combination of a neural network and DE in which DE is used to optimise the process variables for maximum fitness value, whereas the neural network is used to train the network before optimisation starts. The neuro-DE technique has not yet been applied to manufacturing optimisation problems, although it has been shown to be a powerful and superior algorithm (Du et al. 2007, Slowik and Bialko 2008, Subudhi et al. 2008). The basic configuration of the neuro-DE optimisation technique is represented in Figure 7. The performance of DE is highly sensitive to the selection of control parameters. Four major parameters were considered to control the DE optimisation process, viz. population size, i.e. the number of possible solutions under consideration (20–100), cross-over rate (0.1–0.9), differential variation amplification factor (0.1–0.9) and the number of iterations (150).


6.3 Optimisation performance of neuro-GA and neuro-DE techniques

The optimal values of process parameters for maximum deposition efficiency obtained by neuro-GA and neuro-DE are shown in Table 6, where the network parameters associated with these solutions for each optimisation technique are also mentioned. The optimum values of torch angle were found to be negative, as found in the parametric study. The actual available torch angles considered for the validation experiments are shown within brackets. The maximum deposition efficiency was higher using the DE technique than the GA technique. The validation experiments also conclude the same. It could not be further improved using these optimisation tools. The global optimum solutions using the neuro-GA technique almost followed the parametric investigation: deposition efficiency was found to be higher at higher values of peak voltage, pulse frequency and pulse on-time with low welding speed and wire feed rate in backhand welding. However, the neuro-DE indicated maximum deposition efficiency with low pulse

Figure 6 Schematic representation of neuro-GA optimisation steps.



frequency with high pulse on-time because of the better arc stability (metal transfer regularity). It was interesting to note that the two global optimum solutions using neuro-DE differed only in the wire feed rate value. This improvement was also noticed somewhat in the parametric study. However, the validation result showed that a low wire feed rate was slightly better than a high wire feed rate in this case. Thus, there were some hidden optimum solutions, which were identified by the neuro-DE technique but not by the neuro-GA technique.

7 Concluding remarks

The arc stability can be maintained with proper adjustment of process parameters to improve weld deposition. The deposition efficiency is more accurately predicted by the BPNN model as compared with the conventional response surface regression equation, due to the non-linear nature of the P-MIGW process with a lot of uncertainty. The mean absolute prediction error was found to be significantly reduced from 7.87% using RSM to 1.67% using BPNN at η = 0.8 and α = 0.5 with a 6-8-1 network structure. The negative torch angle was found to be better to maintain a stable arc due to the improvement in gas shielding during welding. The optimisation of deposition efficiency using the neuro-DE technique was found to be better than the neuro-GA technique. The neuro-DE technique properly identifies the hidden optimum solutions, which indicates its superiority over neuro-GA due to the difference in genetic operations during the optimisation process.

Figure 7 Schematic representation of neuro-DE optimisation steps.

Acknowledgements

The authors acknowledge the assistance and support provided by the Steel Technology Center and the Welding Laboratory of the Department of Mechanical Engineering of IIT Kharagpur.

References

Abdul-Kader, H.M., 2009. Neural networks training based on differential evolution algorithm compared with other architectures for weather forecasting. International Journal of Computer Science and Network Security, 9, 92–99.

Allen, T.T., et al., 2001. A method for robust process design based on direct minimization of expected loss applied to arc welding. Journal of Manufacturing Systems, 20 (5), 329–348.

Allum, C.J., 1985. Welding technology data: pulsed MIG welding. Weld Metal Fabrication, 53, 24–30.

Amin, M., 1983. Pulsed current parameters for arc stability and controlled metal transfer in arc welding. Metal Construction, 15, 272–278.

Bäck, T. and Schwefel, H.P., 1993. An overview of evolutionary algorithms for parameter optimization. Evolutionary Computation, 1 (1), 1–23.

Balasubramanian, V., et al., 2009. Application of response surface methodology to prediction of dilution in plasma transferred arc hardfacing of stainless steel on carbon steel. International Journal of Iron and Steel Research, 16 (1), 44–53.

Benyounis, K.Y. and Olabi, A.G., 2008. Optimization of different welding processes using statistical and numerical approaches – a reference guide. Advances in Engineering Software, 39, 483–496.

Chauhan, N., Ravi, V., and Chandra, D.K., 2009. Differential evolution trained wavelet neural networks: application to bankruptcy prediction in banks. Expert Systems with Applications, 36, 7659–7665.

Coelho, L.S., 2009. Reliability–redundancy optimization by means of a chaotic differential evolution approach. Chaos, Solitons and Fractals, 41, 594–602.

Correia, D.S., et al., 2004. GMAW welding optimization using genetic algorithms. Journal of the Brazilian Society of Mechanical Science & Engineering, 16 (1), 28–33.

Correia, D.S., et al., 2005. Comparison between genetic algorithms and response surface methodology in GMAW welding optimization. Journal of Materials Processing Technology, 160, 70–76.

De Miranda, H.C., Scotti, A., and Ferraresi, V.A., 2007. Identification and control of metal transfer in pulsed GMAW using optical sensor. Science and Technology of Welding & Joining, 12, 249–257.

Du, J.X., et al., 2007. Shape recognition based on neural networks trained by differential evolution algorithm. Neurocomputing, 70, 896–903.

Fonseca, C.M. and Fleming, P.J., 1995. An overview of evolutionary algorithms in multiobjective optimization. Evolutionary Computation, 3 (1), 1–25.

Ghosh, P.K., et al., 2007. Arc characteristics and behaviour of metal transfer in pulsed current GMA welding of aluminium alloy. Journal of Materials Processing Technology, 194 (1–3), 163–175.

Giridharan, P.K. and Murugan, N., 2009. Optimization of pulsed GTA welding process parameters for the welding of AISI 304L stainless steel sheets. International Journal of Advanced Manufacturing Technology, doi: 10.1007/s00170-008-1373-0.

Hsiao, Y.F., Tarng, Y.S., and Huang, W.J., 2008. Optimization of plasma arc welding parameters by using the Taguchi method with the grey relational analysis. Materials and Manufacturing Processes, 23, 51–58.

Huang, Y.W., Tung, P.C., and Wu, C.Y., 2007. Tuning PID control of an automatic arc welding system using a SMAW process. International Journal of Advanced Manufacturing Technology, 34, 56–61.

Iruthayarajan, M.W. and Baskar, S., 2009. Evolutionary algorithms based design of multivariable PID controller. Expert Systems with Applications, 36, 9159–9167.

Kannan, T. and Yoganandh, J., 2009. Effect of process parameters on clad bead geometry and its shape relationships of stainless steel claddings deposited by GMAW. International Journal of Advanced Manufacturing Technology, 47 (9–12), 1083–1095.

Kim, Y.S., 1989. Metal transfer in gas metal arc welding. Thesis (PhD). Cambridge, MA: MIT Press.

Kim, D., Kang, M., and Rhee, S., 2005. Determination of optimal welding conditions with a controlled random search procedure. Welding Journal (Supplement), 125s–130s.

Kim, I.S., et al., 2001. Development of an intelligent system for selection of the process variables in gas metal arc welding processes. International Journal of Advanced Manufacturing Technology, 18, 98–102.

Kim, D. and Rhee, S., 2002. Design of a robust fuzzy controller for the arc stability of CO2 welding process using the Taguchi method. IEEE Transactions on Systems, Man, and Cybernetics – Part B: Cybernetics.

Kim, D. and Rhee, S., 2004. Optimization of a gas metal arc welding process using the desirability function and the genetic algorithm. Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture, 218, 35–41.

Kim, I.S., et al., 2002. A study on prediction of bead height in robotic arc welding using a neural network. Journal of Material Processing Technology, 130–131, 229–234.

Koh, A., 2009. An adaptive differential evolution algorithm applied to highway network capacity optimization. Applications of Soft Computing, 52, 211–220.

Kumar, A. and Sundarrajan, S., 2009. Optimization of pulsed TIG welding process parameters on mechanical properties of AA 5456 aluminum alloy weldments. Materials and Design, 30, 1288–1297.

Mayer, D.G., Kinghorn, B.P., and Archer, A.A., 2005. Differential evolution – an easy and efficient evolutionary algorithm for model optimization. Agricultural Systems, 83 (3), 315–328.

Meng, T.K. and Buffer, C., 1997. Solving multiple response optimisation problems using adaptive neural networks. International Journal of Advanced Manufacturing Technology, 13, 666–675.

Mike, P. and Kemppi, 1989. Power sources for pulsed MIG welding. Join Materials, June, 268–271.

Mitchell, M., 1998. An introduction to genetic algorithms. Cambridge, MA: The MIT Press.

Moon, H.S. and Na, S.J., 1997. Optimum design based on mathematical model and neural network to predict weld parameters for fillet joints. Journal of Manufacturing Systems, 16, 13–23.

Nouri, M., Abdollah-zadehy, A., and Malek, F., 2007. Effect of welding parameters on dilution and weld bead geometry in cladding. Journal of Materials Science and Technology, 23, 817–822.

Olabi, A.G., et al., 2006. An ANN and Taguchi algorithms integrated approach to the optimization of CO2 laser welding. Advances in Engineering Software, 37, 643–648.

Pal, K., Bhattacharya, S., and Pal, S.K., 2009. Prediction of metal deposition from arc sound and weld temperature signatures in pulsed MIG welding. International Journal of Advanced Manufacturing Technology, 45 (11–12), 1113–1130.

Pal, K., Bhattacharya, S., and Pal, S.K., 2009a. Multisensor based monitoring of weld deposition and plate distortion for various torch angles in pulsed MIG welding. doi: 10.1007/s00170-010-2523-8.

Pal, S., Pal, S.K., and Samantaray, A.K., 2008. Artificial neural network modeling of weld joint strength prediction of a pulsed metal inert gas welding process using arc signals. Journal of Material Proceedings Technology, 202, 464–474.

Park, Y.W. and Rhee, S., 2008. Process modeling and parameter optimization using neural network and genetic algorithms for aluminum laser welding automation. International Journal of Advanced Manufacturing Technology, 37, 1014–1021.

Slowik, A. and Bialko, M., 2008. Training of artificial neural networks using differential evolution algorithm. IEEE, HIS 2008, Krakow, Poland, 60–65.

Smati, Z., 1986. Automatic pulsed MIG welding. Metal Construction, 38R–44R.

Song, Y.A., Park, S., and Chae, S.W., 2005. 3D welding and milling: part II – optimization of the 3D welding process using an experimental design approach. International Journal of Machine Tools & Manufacture, 45, 1063–1069.

Storn, R. and Price, K., 1995. Differential evolution – a simple and efficient adaptive scheme for global optimization over continuous spaces. Technical report TR-95-012, ICSI, March 1995, ftp.icsi.berkeley.edu, 1–12.

Subudhi, B., Jena, D., and Gupta, M.M., 2008 Memeticdifferential evolution trained neural networks for non-linear system identification 2008 IEEE region 10colloquium and the third international conference onindustrial and information systems, Kharagpur, India, 1–6

Tarng, Y.S., Tsai, H.L., and Yeh, S.S., 1999 Modeling,optimization and classification of weld quality intungsten inert gas welding International Journal ofMachine Tools & Manufacture, 39, 1427–1438

Thamodharan, M., Beck, H.P., and Wolf, A., 1999 Steadyand pulsed direct current welding with a single converter.Welding Journal, 78 (3), 75s–79s

Tong, H., et al., 2001 Quality and productivity improvement

in aluminium alloy thin sheet welding using alternatingcurrent pulsed metal inert gas welding system ScienceTechnology Welding Joint, 6, 203–208

Tseng, H.Y., 2006 Welding parameters optimization foreconomic design using neural approximation and geneticalgorithm International Journal of Advanced Manufac-turing Technology, 27, 897–901

Werbos, P.J., 1974 Beyond regression: new tools forprediction and analysis in the behavioral sciences.Thesis(PhD) Harvard University, Cambridge, MA

Werbos, P.J., 1990 Backpropagation through time: what itdoes and how to do it Proceedings of the IEEE, 78 (10),1550–1560

Zhang, Y.M., Liguo, E., and Walcott, B.L., 2000 Robustcontrol of pulsed gas metal arc welding Journal ofDynamic System, Measure Control, ASME, 124, 1–9

Trang 24

A lean pull system design analysed by value stream mapping and multiple criteria decision-making method under demand uncertainty

Jiunn-Chenn Lu, Taho Yang* and Cheng-Yi Wang

Institute of Manufacturing Information and Systems, National Cheng Kung University, Tainan, Taiwan

(Received 11 May 2010; final version received 19 December 2010)

Lean philosophy is a systematic approach for identifying and eliminating waste through continuous improvement in pursuit of perfection, using a pull-control strategy derived from customers' requirements. However, not all lean implementations have produced the desired results, because of the lack of a clear implementation procedure and execution guide. This article proposes a lean pull system implementation procedure that combines a supermarket supply with two constant work-in-process (CONWIP) structures, which can concurrently consider manufacturing system variability and demand uncertainty in multi-product, multi-stage processes to achieve a lean pull system. The study uses a multiple criteria decision-making (MCDM) method, a hybrid of the Taguchi technique and the order preference by similarity to ideal solution (TOPSIS) method, that takes customer demand uncertainty as a noise factor. This allowed identification of the most robust production control strategy and of an optimal scenario from the alternative designs. Value stream mapping (VSM) was applied to visualise what conditions would work when improvements are introduced. Finally, a real-world, thin film transistor-liquid crystal display (TFT-LCD) manufacturing case-study under demand uncertainty is used to demonstrate and test the findings. After comparing the current-state map and the future-state map of the case-study, the simulation results indicate that the average cycle time was reduced from 15.4 days to 4.82 days without any loss of throughput.

Keywords: demand uncertainty; lean production; multiple criteria decision making; pull strategy; Taguchi method; TOPSIS

1 Introduction

Lean philosophy is a systematic approach for identifying and eliminating waste through continuous improvement in pursuit of perfection, using a pull-control strategy derived from customers' requirements (Lian and Van Landeghem 2007, Khalil et al. 2008, Schaeffera et al. 2010). However, not all lean implementations have produced the desired results. Most previous lean production research assumed that the production process is stable and ignored process variability due to random setup times for changeovers, random breakdowns and yield loss. As a consequence, existing lean production theory benefits only production systems without significant system variation. In addition, we are not aware of any lean literature that has focused on thin film transistor-liquid crystal display (TFT-LCD) applications.

Lean manufacturing development requires the analysis of the value stream, with all constituent activities, both value added (VA) and non-value added (NVA). Value stream mapping (VSM) is a useful tool for solving practical problems, especially when combined with simulation to find an ideal future-state in a complex real-world manufacturing environment. VSM was introduced as a functional method to aid practitioners in rearranging manufacturing systems (Serrano et al. 2009).

In this research, evolving from a push control system to a lean pull-production strategy and lean pull-control system requires the following actions. First, set the bottleneck stage as the pacemaker, then integrate the downstream final processes into a continuous flow that is pulled by customers' orders, thus maintaining a constant work-in-process (CONWIP). Second, set up a supermarket in front of the pacemaker operation to act as a buffer. Simultaneously, integrate the upstream workstations as CONWIP operations to keep the whole system as a continuous flow.

This proposed manufacturing system can be optimised by determining the upper level for the CONWIP and the supermarket level of each product type. In order to find the best combination of controllable variables, a multiple criteria decision-making (MCDM) method, using a hybrid of the Taguchi technique and the order preference by similarity to ideal solution (TOPSIS) method, is adopted. This uses customer demand uncertainty as a noise factor. Furthermore, using simulation, we evaluate the most robust production control strategy that is solved by the proposed methodology.

The remainder of the article is organised as follows. Section 2 reviews earlier studies. In Section 3, the case-study is described, accompanied by details of the proposed lean pull implementation procedure. Empirical illustrations are discussed in Section 4. The conclusion and future research are addressed in Section 5.

*Corresponding author. Email: tyang@mail.ncku.edu.tw

International Journal of Computer Integrated Manufacturing
Vol. 24, No. 3, March 2011, 211–228
ISSN 0951-192X print/ISSN 1362-3052 online
© 2011 Taylor & Francis
DOI: 10.1080/0951192X.2010.551283

2 Literature review

The concept of manufacturing management encompassing a lean philosophy was introduced in the 1980s by a research group at the Massachusetts Institute of Technology (MIT), after studying the Japanese style of production, principally the Toyota Production System (TPS) (Womack et al. 1990). The term lean production was first used by Krafcik (1988) and popularised by Womack et al. (1990). In the past 20 years, much attention has focused on lean manufacturing (Ranky 2003, Agyapong-Kodua et al. 2009, Browning and Heath 2009). However, most previous researchers focused mainly on assembly manufacturing environments; few studies have used VSM, a lean tool, to solve problems in machine-shop industries, i.e. TFT-LCD manufacturing.

The VSM technique introduced by Rother and Shook (1998) provides a practical guiding tool for lean implementation. Since VSM was created, an extensive literature has identified the strengths of lean tools when applied in the real world (Sullivan et al. 2002, McKenzie and Jayanthi 2007) and across different fields (Lummus et al. 2006). It has proved effective in identifying the physical details of the manufacturing process (Braglia et al. 2006), and has been introduced as a functional method to help practitioners rearrange manufacturing systems (Serrano et al. 2009).

It is recognised, however, that when solely using VSM to implement lean systems, it is not easy to reach a final decision among different potential scenarios. McDonald et al. (2002) used a simulation tool to evaluate alternative ideal future-state scenarios. Abdulmalek and Rajgopal (2007) extended the use of VSM to a continuous process in the steel industry. However, their research did not consider sufficient variability, i.e. random setup times, demand uncertainty, etc.; they assumed that most of these parameters were constant. In reality, such variability and complexity are common.

The philosophy of lean production focuses on the requirements of external customers (Womack and Jones 1996). Demand uncertainty significantly affects system performance. Many researchers have proposed to deal with the demand uncertainty that results from many product variants by suggesting postponement (Skipworth and Harrison 2006) or a make-to-stock policy (Yang and Geunes 2007). In the past, researchers have done extensive work on lean and pull strategies, focusing on hybrid push/pull systems, leagile systems (Naylor et al. 1999), decoupling points (Mason-Jones et al. 2000) and postponement (Skipworth and Harrison 2006). However, few researchers have focused on a pull-control strategy or lean manufacturing to solve demand uncertainty in high-technology industry. Arguably, this is because variability is a significant noise factor for a pull system: process and demand variability, random breakdowns, random setup times, etc. (Stockton et al. 2005). It is therefore necessary to apply an adequate methodology to find a robust solution that considers these noise factors within a lean production paradigm.

Khalil et al. (2008) used discrete event simulation to generate data so that estimating models can adopt lean practices in a broader range of production environments. However, they only considered cases where a single objective was adopted. Tong and Su (1997) considered the loss of quality for each response to solve the multi-response robust-design problem using TOPSIS. Their approach explicitly considered the sampling variability of each response using the Taguchi quality-loss function. The MCDM method TOPSIS is used to select a suitable option; much of the existing literature uses it to solve manufacturing problems (Yang and Hung 2007, Parkan and Wu 1998), while simultaneously applying simulation to obtain a robust solution for each scenario (Yang et al. 2007b,c, Kuo et al. 2008).

From the literature review, it is apparent that there is limited research focusing on the implementation of a pull-control strategy in multi-product, multi-stage processes considering both internal manufacturing system variability and external demand uncertainty. An explicit deficit in the literature is the lack of attention to random features of internal manufacturing parameters and external demand uncertainty in real-world applications. Moreover, little extant research proposes a lean implementation procedure (Browning and Heath 2009) for high-technology industries, i.e. TFT-LCD and IC foundry. Based on these requirements, this article proposes combining one supermarket supply with two CONWIP structures that can concurrently consider manufacturing system variability and demand uncertainty in multi-product, multi-stage processes to achieve a lean pull system. To manage the transition from the current-state VSM to the future-state VSM, this research proposes using simulation for performance evaluation. Then, the present study uses an MCDM method, a hybrid Taguchi and TOPSIS method that takes customer demand uncertainty as a noise factor, to obtain robust system parameters for the lean pull-production system. Accordingly, the objectives of the present study can be summarised as follows:

(i) to propose a lean pull strategy for solving a practical case study from TFT-LCD manufacturing,
(ii) to consider both internal manufacturing system variability and external demand uncertainty for the study,
(iii) to solve the future-state VSM by simulation and MCDM methods.

3 Proposed methodology

The proposed methodology is shown in Figure 1. The first step identifies VA from the current-state VSM, which highlights the waste and the opportunities for improvement. Second, a lean implementation procedure is identified for an ideal future-state map (proposed in the present research, with details described in Section 3.2). Third, a simulation model is created for the current-state and for the alternative future-state mappings. Fourth, the optimal scenario is sought by MCDM, a hybrid of Taguchi and TOPSIS. Finally, the ideal future-state VSM is created.

3.1 Identify VA from current-state VSM

VSM provides a picture of both current-state and future-state maps. The difference between the current-state and potential future-states is helpful in visualising what conditions would work when improvements are made (McKenzie and Jayanthi 2007). The current-state map serves as the basis for developing future-state maps, which eliminate wasted steps and interfaces while pulling resources through the system and smoothing flow.

The modelling data are collected from the manufacturing execution system (MES) of the case company. They include: workstation names, number of machines in each workstation, process time, setup time, mean time between failures (MTBF), mean time to repair (MTTR), and batch sizes. Arena, a commercial discrete-event simulator, is adopted for the study (Arena User's Guide 2009, Kelton et al. 2009). The setup, MTBF and MTTR are stochastic data. Their associated theoretical statistical distributions are fitted by the Input Analyser, an embedded function of Arena.

3.2 Implement lean pull-production guidelines

The present research proposes a lean pull implementation guideline that evolves from the original push system to an ideal pull system (depicted in Figure 1 and illustrated in the following).

3.2.1 Takt time calculation

The rate at which customers purchase product from the production plant is the so-called takt time. Takt time is used to synchronise the pace of the pacemaker process's production with the pace of sales (Rother and Shook 1998). The takt time is measured by Equation (1):

takt time = available work time per shift/day ÷ customer demand rate per shift/day   (1)

After the takt time is calculated, the cycle time is set. Cycle time is the actual time between completions of consecutive units of product or component. Thus, takt time should be less than or equal to cycle time. A goal of just-in-time (JIT) production is to have takt time equal to cycle time (Miltenburg 2007).
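Equation (1) is simple enough to sketch in code. The numbers below (a 24-hour working day and a daily demand of 192 lots) come from the case-study; the cycle time value is an illustrative assumption used only to show the JIT check:

```python
# Takt time per Equation (1): available work time divided by customer
# demand over the same period.

def takt_time(available_minutes_per_day: float, demand_per_day: float) -> float:
    """Takt time in minutes per unit (Equation (1))."""
    return available_minutes_per_day / demand_per_day

takt = takt_time(1440.0, 192.0)   # 24 h plant, 192 lots/day -> 7.5 min/lot
cycle_time = 7.5                  # illustrative cycle time (assumption)
jit_pace_matched = (takt == cycle_time)  # JIT goal: takt equals cycle time
```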

Figure 1. Proposed experiment procedure.

3.2.2 Pacemaker selection

When using a pull system with a supermarket in the value stream, the only scheduling point in the production system is called the pacemaker process. This point sets the pace of production and ties the downstream and upstream processes together. Note that material transfer from the pacemaker downstream to finished goods is a continuous flow that is controlled by a customer's order. In reality, it is possible for the pacemaker and bottleneck to coincide (Serrano et al. 2008, 2009).

3.2.3 Continuous flow whenever possible or control by CONWIP

Continuous flow refers to producing one lot at a time with no work-in-process (WIP) between two workstations. The material transfer from the pacemaker downstream to finished goods must be a continuous flow. However, most production processes have significant variability, such as random setup times for changeovers, random breakdowns and the yield-loss compensation required under a pure pull-control strategy, which makes one-lot-at-a-time, concurrent and continuous flow at each workstation unrealistic.

Kanban and CONWIP are the two most commonly seen pull strategies (Yang et al. 2007a). Kanban is effective for repetitive manufacturing environments, but potentially involves a large number of Kanban cards (Huang and Kusiak 1996, Paris and Pierreval 2001). Generally, CONWIP combines the low inventory level of Kanban with the high throughput of push, and outperforms the sole use of a Kanban system (Gaury et al. 2000, Jodlbauer and Huber 2008). Consequently, CONWIP is the simplest way to implement a pull-control strategy and, at the same time, to face uncertain and dynamic environments, where Kanban does not perform as well (Satyam and Krishnamurthy 2008).
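The CONWIP release rule described above can be sketched as a simple card-count mechanism: a new lot enters the loop only while the WIP count is below a fixed cap, and a card is freed when a lot completes. The `ConwipLoop` class and its cap value are illustrative, not from the article:

```python
# Minimal CONWIP control sketch: total WIP in the loop never exceeds the
# CONWIP upper limit. Real lines would drive this from discrete events.

from collections import deque

class ConwipLoop:
    def __init__(self, wip_cap: int):
        self.wip_cap = wip_cap
        self.wip = deque()      # lots currently inside the loop

    def try_release(self, lot) -> bool:
        """Release a lot into the loop only if a CONWIP card is free."""
        if len(self.wip) < self.wip_cap:
            self.wip.append(lot)
            return True
        return False            # backlog waits; WIP stays capped

    def complete(self):
        """A lot finishes and leaves, freeing one CONWIP card."""
        return self.wip.popleft() if self.wip else None

loop = ConwipLoop(wip_cap=3)
released = [loop.try_release(i) for i in range(5)]  # only first 3 enter
```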

3.2.4 The use of supermarkets to control production where continuous flow does not extend upstream

In some situations, the continuous flow may be stopped by process variability. A supermarket, with a fixed amount of storage, has capacity to act as a buffer that absorbs variability. Invariably, production scheduling must allow for material availability between two consecutive runs, so that downstream processes do not stop working.

3.2.5 Level scheduling at the pacemaker

If changeover times at the pacemaker are long, it is also difficult to achieve pull production by reducing batch sizes, because a substantial amount of production time would be lost to pacemaker changeovers. Level scheduling constructs a schedule that matches actual production to takt time and cycle time (Miltenburg 2007). In other words, level scheduling is a tactical levelling-planning decision to reduce the variability of the production rate; it creates a stable demand stream at the pacemaker process, which in turn pulls the downstream flow, resulting in a short lead time (Rother and Shook 1998).

3.3 Develop simulation model: current-state and future-state

In real-world practice, decision makers always need more quantitative evidence to implement lean ideas. Simulation has the capability of demonstrating the benefits of lean manufacturing throughout the entire manufacturing system (Detty and Yingling 2000). Simulation is used to model the manufacturing processes for a core product family and to validate the current-state map, as well as to evaluate alternative scenarios of a future-state map. These simulation results enable management to compare the expected performance of the lean system across alternative scenarios.

3.4 Search optimal scenario by MCDM

To analyse and evaluate optimal parameters for the future-state map, a Taguchi experimental design was planned for the study. Simultaneously, this research conducts TOPSIS, because TOPSIS is an effective MCDM methodology in the literature and in practice (Yang and Chou 2005); thus, it is used for the present study.

The main essence of the hybrid Taguchi and TOPSIS approach is the notion of quality loss transformation. The idea of quality loss is applied to the present study by transforming the performance measures into quality loss functions as follows.

Let L_ij be the quality loss for the jth response at the ith scenario, and let y_ijk be the simulation result for the jth response at the ith scenario under the kth demand uncertainty scenario. N is the total number of demand uncertainty scenarios. The quality loss functions can then be defined as shown in Equations (2) and (3) (Tong and Su 1997):

L_ij = k1 (1/N) Σ_{k=1}^{N} y_ijk^2   (2)

for the smaller-the-better response, and

L_ij = k2 (1/N) Σ_{k=1}^{N} (1 / y_ijk^2)   (3)

for the larger-the-better response.

The above loss functions were normalised to transform the performance measures into a 'larger-the-better' type of measurement by Equation (4), no matter whether the response is 'larger-the-better' or 'smaller-the-better':

x_ij = (L_i^max − L_ij) / (L_i^max − L_i^min)   (4)

where x_ij (0 ≤ x_ij ≤ 1) is the normalised loss function for the ith response at the jth scenario; L_i^max = max{L_i1, L_i2, ..., L_in}; and L_i^min = min{L_i1, L_i2, ..., L_in}. The resulting x_ij is a 'larger-the-better' type benefit function.
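The loss and normalisation transformations of Equations (2)–(4) can be sketched as follows, assuming unit loss coefficients (k1 = k2 = 1) and illustrative response data; the article does not publish its raw simulation responses:

```python
# Taguchi quality loss (Equations (2)-(3)) and min-max normalisation to a
# larger-the-better benefit measure (Equation (4)).

def loss_smaller_the_better(y, k1=1.0):
    return k1 * sum(v * v for v in y) / len(y)          # Equation (2)

def loss_larger_the_better(y, k2=1.0):
    return k2 * sum(1.0 / (v * v) for v in y) / len(y)  # Equation (3)

def normalise(losses):
    """Equation (4): map losses into [0, 1], larger-the-better."""
    lo, hi = min(losses), max(losses)
    return [(hi - L) / (hi - lo) for L in losses]

# Illustrative WIP responses over N = 2 demand scenarios, for 3 designs.
wip_losses = [loss_smaller_the_better(y)
              for y in ([4.0, 5.0], [2.0, 2.5], [6.0, 7.0])]
x = normalise(wip_losses)   # lowest-loss design maps to 1.0, highest to 0.0
```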

A MCDM problem with m alternatives that are evaluated by n attributes can be viewed as a geometric system with m points in an n-dimensional space. Hwang and Yoon (1981) developed TOPSIS based on the concept that the chosen alternative should have the shortest distance from the positive ideal solution and the longest distance from the negative ideal solution. Readers can refer to Yoon and Hwang (1995) for details. The terms used in the algorithm development are briefly defined as follows:

Attributes: Attributes (X_j, j = 1, 2, ..., n) should provide a means of evaluating the levels of an objective. Each alternative can be characterised by a number of attributes.

Alternatives: These are synonymous with 'options' or 'candidates'. Alternatives (A_i, i = 1, 2, ..., m) are mutually exclusive of each other.

Decision matrix: A MCDM problem can be concisely expressed in a matrix format, in which columns indicate the attributes considered in a given problem and rows list the competing alternatives. Thus, an element x_ij of the matrix indicates the performance rating of the ith alternative, A_i, with respect to the jth attribute, X_j.

Attribute weights: Weight values (w_j, j = 1, 2, ..., n) represent the relative importance of each attribute to the others.

Normalisation: Normalisation seeks to obtain comparable scales, which allows attribute comparison. The vector and linear normalisation approaches are the two most commonly seen methods in the literature. The vector normalisation approach divides the rating of each attribute by its norm to find the normalised value of x_ij, as defined in Equation (5). Note that the vector normalisation approach is for beneficial attributes:

r_ij = x_ij / sqrt( Σ_{i=1}^{m} x_ij^2 )   (5)

Let x_j* be the maximum value of the jth attribute; then the linear normalisation approach divides the ratings of a certain attribute by its maximum value, as defined in Equation (6):

r_ij = x_ij / x_j*   (6)

The weighted normalised rating v_ij is then computed by Equation (7):

v_ij = w_j r_ij, i = 1, 2, ..., m; j = 1, 2, ..., n.   (7)

Step 3: Identify the positive ideal and negative ideal solutions. The A* and A− are defined in terms of the weighted normalised values, as shown in Equations (8) and (9):

A* = {v_1*, v_2*, ..., v_n*} = {max_i v_ij | j = 1, 2, ..., n}   (8)

A− = {v_1−, v_2−, ..., v_n−} = {min_i v_ij | j = 1, 2, ..., n}   (9)

Step 4: Calculate separation measures. The separation between alternatives is measured by the n-dimensional Euclidean distance. The separation of each alternative from the positive ideal solution, A*, is given by Equation (10), and that from the negative ideal solution, A−, by Equation (11):

S*_i = sqrt( Σ_{j=1}^{n} (v_ij − v_j*)^2 ), i = 1, 2, ..., m   (10)

S−_i = sqrt( Σ_{j=1}^{n} (v_ij − v_j−)^2 ), i = 1, 2, ..., m   (11)


Step 5: Calculate similarities to the ideal solution, as defined in Equation (12):

C*_i = S−_i / (S*_i + S−_i), i = 1, 2, ..., m.   (12)

Note that 0 ≤ C*_i ≤ 1, where C*_i = 0 when A_i = A−, and C*_i = 1 when A_i = A*.

Step 6: Rank the preference order. Choose the alternative with the maximum C*_i, or rank the alternatives according to C*_i in descending order.

3.5 VSM: future-state

The proposed approach highlights and eliminates the sources of waste by using a future-state VSM and simulation. The future-state VSM becomes the roadmap to achieving a continuous flow system. It is realised by fulfilling the proposed lean-pull production strategy, giving an idea of the required actions and lean tools to improve the system, such as takt time setup, demand management, pacemaker location, etc. This map then becomes the basis for making the necessary changes to the system. This allows verification of the result, which can help managers to plan and improve performance indices, i.e. throughput, cycle time and WIP, in a short period of time.

4 Empirical results

An example of TFT-LCD processes is adopted to illustrate the proposed methodology as discussed in Section 3.

4.1 Identify VA from current-state VSM

In this case-study, a TFT-LCD manufacturing company in Tainan County, Taiwan, is investigated. The company's business revenue was more than one billion US dollars in 2008. The company buys TFT plates and applies a colour filter (CF) process to scribe the product size. The operation comprises the following steps: inject liquid crystal, seal the panel, bevel to round the edges, attach the polariser and insert the driving integrated-circuit joints. Then, the product may be shipped to an overseas subsidiary or completed in the same factory. The two options depend on customers' lead-time requirements and the particular business model. The presented case-study concerns a turnkey production line undertaken in a single plant.

The next stage is an anisotropic conductive film (ACF) process, followed by flexible print circuit (FPC) bonding, and then an ultraviolet (UV) process to enhance the FPC pull-strength resistance. The silicon process enhances the reliability of the product. The assembly process includes all the necessary parts, such as back lights, diffusers and so on, to complete the final TFT-LCD module. The final inspection process ensures the product's quality for the end customer. Four types of products were selected as experiment material. The data were collected from the historical data of the company's MES database. The MES defines a lot size as 30 pieces, and it was therefore selected as the simulation entity unit. Since some of the operation data are confidential, some data in this study have been modified to respect the confidential proprietary information of the company. The process time for each workstation is summarised in Table 1 and is measured in minutes per lot. The setup times are shown in Table 2. The MTBF and MTTR data are shown in Table 3. All data for the current-state map were collected from the MES, taking the average data as representative information. Figure 2 shows the current-state VSM of product and information flow in the company.

Table 1. Process time data (in minutes per lot).


The box symbol in the map represents the workstation, and each process has a data sheet shown below it, including process time, machine number, setup time, etc. Regarding the demand from the customers, it is assumed that the daily demand is 192 lots. This information is derived from historical order data from the case company. In our simulation model, there is one product type for each order. It is apparent that the company's manufacturing environment has the following features: random setup times for different product changeovers, random breakdowns, batch processes and yield loss at three test workstations. Such complexities have been compensated for by the manufacturing manager through adopting large amounts of inventory to reduce the effects of uncertainties.

4.2 Implement lean pull-production guidelines

The presented implementation procedure is as follows:

4.2.1 Takt time calculation

There are two yield-loss workstations (Test #2 and final inspection) after Chip on Glass (COG). The

Table 2. Setup time data (in minutes).

Table 3. Machine data.

Figure 2. Current-state VSM of product and information flow in the case company.

throughput required for the final products is an average of 192 lots to fulfil customers' daily demands. So the takt time at COG, accounting for yield loss, is approximately:

1440 / (192 / (0.95 × 0.95)) ≈ 6.77 minutes per lot
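As a quick numerical check of the figure above, assuming the two 0.95 factors are the yields of the Test #2 and final-inspection workstations:

```python
# COG takt-time check: daily demand is inflated by the two 95%-yield
# workstations downstream, so COG must produce 192/0.9025 lots in the
# 1440 available minutes per day.

effective_demand = 192 / (0.95 * 0.95)   # lots/day COG must start
takt_at_cog = 1440 / effective_demand    # minutes per lot
```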

4.2.2 Pacemaker selection

In this case-study, the bottleneck at the COG workstation affects the whole value stream, both the WIP level and the takt time.

4.2.3 Continuous flow whenever possible or control by CONWIP

In this research, building a continuous flow from the pacemaker downstream workstation is unrealistic. This is due to process variability that is higher than in an ideal manufacturing system, i.e. random breakdowns at the FPC workstation. These high-variability characteristics mean that the company is unable to set up a pure Kanban system to maintain a downstream continuous flow. The present research uses a CONWIP design to replace the individual pull-workstations, which allows continuous flow along the downstream workstations.

4.2.4 The use of supermarkets to control production where continuous flow does not extend upstream

In this case-study, the continuous flow may be stopped by upstream process variability, i.e. product changeovers, machine breakdowns and batch processes. It is therefore necessary to establish a number of WIP buffers ('supermarkets') to absorb this production variability. In this research, we propose using a CONWIP design to integrate the upstream workstations, ranging from the Cutting to the Polariser workstation, using only one supermarket to maintain the upstream continuous flow.

4.2.5 Level scheduling

As shown in Tables 2 and 3, the setup time and breakdown time at each workstation are random rather than constant. Consequently, level production times cannot be obtained by the theoretical formulation proposed by previous research (Smalley and Womack 2004). Our research obtained the level production times by trial-and-error simulation, which led to the use of four changeovers a day. This setup assures the feasibility of the bottleneck.

4.3 Develop simulation model

A number of replications were required to obtain an appropriate confidence interval. The coefficient of variation (CV) is a measure of the dispersion of a probability distribution. It is defined as the ratio of the sample standard deviation to the sample mean and is used for the replication decision. The CV chart is illustrated in Figure 3. Under observation, 25 replications were required to achieve the required performance report.
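The CV-based replication decision can be sketched as follows. The 5% threshold and the throughput samples are illustrative assumptions; the article judges convergence by inspecting a CV chart rather than by a fixed cut-off:

```python
# Replication-count check via the coefficient of variation: CV is the
# sample standard deviation divided by the sample mean, and replications
# are added until the CV settles below a chosen threshold.

import statistics

def coefficient_of_variation(samples):
    return statistics.stdev(samples) / statistics.mean(samples)

def enough_replications(samples, threshold=0.05):
    """True when dispersion across replications is acceptably small."""
    return len(samples) >= 2 and coefficient_of_variation(samples) < threshold

throughputs = [191.2, 192.4, 191.8, 192.1, 191.9]  # lots/day, per replication
ok = enough_replications(throughputs)
```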

4.4 Search optimal scenario by MCDM

The objective of this step was to find the potential optimal future-state map by implementing a lean pull-strategy. Since some of the operation data are confidential, they have been modified to respect the confidential proprietary information of the company. The present study adopts WIP, cycle time and throughput as the performance criteria. The inventory between the start and end points of a production routing is called WIP. The cycle time is the average time from the release of a job at the beginning of the routing until it reaches an inventory point at the end of the routing. Throughput is the average output of a production process per unit time (Hopp and Spearman 2008). The case-study company tries to implement lean pull production to reduce WIP and cycle time, but it is important to keep its throughput performance. Note that the use of a discrete-event simulator has the flexibility to collect a variety of performance measures. The present study adopts cycle time, WIP and throughput as the performance criteria, but is not limited to them. For example, service level and inventory cost are used in the literature (Yang et al. 2007b) and can be adopted easily when there is a need.

Figure 3 The CV chart

International Journal of Computer Integrated Manufacturing 219


The five-level control factors are the upstream CONWIP upper limit, the downstream CONWIP upper limit, and the supermarket upper limits for the four product types. The demand-uncertainty scenario design, as an outer orthogonal array, was conducted under the following conditions. The daily demand quantity was 192 lots per day; for designing the demand-uncertainty scenarios, this study assumed 8 daily orders, so the mean of each order was assumed to be 24 lots. Order quantities were normally distributed, with a variance of 15% or 30% of the mean, and order arrival times were either constant or exponentially distributed with a mean of 3 h. The scenario design for demand uncertainty is shown in Table 4.
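The demand-uncertainty generator implied above (8 orders per day with mean quantity 24, normally distributed quantities, and constant or exponential inter-arrival times with a 3 h mean) can be sketched as follows. Truncating quantities at 1 lot and all function names are assumptions made for illustration:

```python
import random

def daily_orders(mean_qty=24, cv=0.15, n_orders=8,
                 arrival="constant", mean_gap_h=3.0, rng=random):
    """Generate one day of orders: normally distributed quantities
    (std = cv * mean, truncated at 1 lot) and either constant or
    exponentially distributed inter-arrival times (in hours)."""
    qtys = [max(1, round(rng.gauss(mean_qty, cv * mean_qty)))
            for _ in range(n_orders)]
    if arrival == "constant":
        gaps = [mean_gap_h] * n_orders
    else:  # exponential inter-arrival times with the same mean
        gaps = [rng.expovariate(1.0 / mean_gap_h) for _ in range(n_orders)]
    return list(zip(qtys, gaps))
```

Each outer-array scenario would fix one (cv, arrival) pair from Table 4 and feed the generated orders into the simulation model.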

The study possesses six control factors, together with their respective bounds, as shown in Table 5. In Table 5, the six factors are denoted as A, B, C, D, E and F. The five levels of all factors are denoted 1 to 5 (from low to high), and there are three responses, WIP, cycle time and throughput, represented by xi1, xi2 and xi3, respectively. The first and second factors are the WIP upper limits of the first and second CONWIP loops, respectively. Since each product has different process characteristics, i.e. process time and setup time, the research assumes that the buffer required for each product type will affect the performance of the system. The factors C to F are therefore the supermarket limits for products A, B, C and D, respectively. The lower bound of each factor was tested by simulation and represents the minimum WIP requirement to achieve 90% of daily demand. Therefore, an L50 (2^1 × 5^11) orthogonal array was used to collect the experimental data, and columns 2–7 were adopted to represent the six control factors.

For each experimental scenario, there were nine demand-uncertainty scenarios to collect proper response-variance data. Taguchi's loss function was adopted to account for the mean and the variability of each response.
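A common form of Taguchi's loss function, used here as a stand-in sketch because the paper's exact equations are not reproduced in this excerpt, applies mean squared loss to smaller-the-better responses (WIP, cycle time) and mean inverse-squared loss to larger-the-better responses (throughput), then normalises by the worst loss across scenarios. The unit loss coefficient (k = 1) is an assumption:

```python
def quality_loss(values, kind):
    """Average Taguchi loss with unit coefficient k = 1:
    smaller-the-better uses mean(y^2); larger-the-better uses
    mean(1/y^2), so larger observations incur less loss."""
    n = len(values)
    if kind == "smaller":
        return sum(y * y for y in values) / n
    if kind == "larger":
        return sum(1.0 / (y * y) for y in values) / n
    raise ValueError(f"unknown loss type: {kind}")

def normalised_losses(loss_by_scenario):
    """Scale each scenario's loss by the worst (largest) loss so the
    normalised values lie in (0, 1]."""
    worst = max(loss_by_scenario)
    return [loss / worst for loss in loss_by_scenario]
```

The mean and variability of a response across the nine demand scenarios both enter through the averaging inside `quality_loss`.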

After the data for each response were obtained, the normalised quality-loss functions were calculated using Equations (2) to (4). The first step of TOPSIS was to calculate the normalised rating for xij, i = 1, ..., m; j = 1, ..., n. The two normalisation methods defined in Equations (5) and (6) were adopted for this purpose. The experimental results are shown in Table 6. The next step was to calculate the weighted normalised rating using Equation (7). Since throughput performance has the highest priority, it has the highest weight, 0.6. Assuming that w1, w2 and w3 were 0.2, 0.2 and 0.6, respectively, the positive-ideal and negative-ideal solutions, A* and A−, could be found from Equations (8) and (9). Equations (10) and (11) determined the separation measures, Si* and Si−. Finally, Equation (12) gave the similarity to the ideal solution for each scenario, Ci*. Because both vector normalisation and linear normalisation led to the same factor levels, the linear normalisation is omitted in this research. The final results using the vector normalisation method are summarised in Table 7.
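The TOPSIS steps summarised above (vector normalisation, weighting, positive-ideal and negative-ideal solutions, separation measures, and the similarity index Ci*) can be sketched as below. This is a generic TOPSIS implementation under the stated weights, not a reproduction of the paper's exact Equations (5) to (12):

```python
import math

def topsis(matrix, weights, benefit):
    """TOPSIS with vector normalisation.
    matrix[i][j]: rating of scenario i on criterion j;
    benefit[j]: True if larger is better (e.g. throughput),
    False if smaller is better (e.g. WIP, cycle time)."""
    m, n = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m)))
             for j in range(n)]
    # weighted normalised ratings
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    ideal = [max(v[i][j] for i in range(m)) if benefit[j]
             else min(v[i][j] for i in range(m)) for j in range(n)]
    nadir = [min(v[i][j] for i in range(m)) if benefit[j]
             else max(v[i][j] for i in range(m)) for j in range(n)]
    s_plus = [math.dist(v[i], ideal) for i in range(m)]   # distance to ideal
    s_minus = [math.dist(v[i], nadir) for i in range(m)]  # distance to nadir
    # similarity to the ideal solution, C_i*
    return [s_minus[i] / (s_plus[i] + s_minus[i]) for i in range(m)]
```

For the three criteria in this study one would call, e.g., `topsis(data, (0.2, 0.2, 0.6), (False, False, True))`, treating WIP and cycle time as cost criteria and throughput as a benefit criterion.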

Ci*, i = 1, 2, ..., 50, were the surrogate responses for the proposed problem. According to the principles of the robust-design method, if the effects of the control factors on performance and robustness are additive (that is, if they follow the superposition principle), it is possible to predict performance for any combination of control-factor levels by knowing only the main effects of the control factors. Using this additive property, the average responses by factor level can be computed; the same calculation is then applied to all other factor levels. The resulting factor effects for the vector normalisation method are summarised in Table 8. The associated factor-effect plot is shown in

Table 5 Control variables


Figure 4. Since the effect value is of the larger-the-better type, both normalisation methods led to the same final parameter design, A2B3C1D1E1F1.
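Under the superposition assumption, the average surrogate response by factor level (the basis of Table 8 and Figure 4) can be computed as sketched below; the data layout and names are assumptions:

```python
from collections import defaultdict

def factor_effects(designs, responses):
    """Average the surrogate response (e.g. TOPSIS C_i*) over every
    run in which a factor took a given level. Under the superposition
    principle, the best level of each factor is the one with the
    largest average (larger-the-better effect value)."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for levels, c in zip(designs, responses):
        for factor, level in enumerate(levels):
            sums[(factor, level)] += c
            counts[(factor, level)] += 1
    averages = {key: sums[key] / counts[key] for key in sums}
    n_factors = len(designs[0])
    best = [max((lvl for (f, lvl) in averages if f == i),
                key=lambda lvl, i=i: averages[(i, lvl)])
            for i in range(n_factors)]
    return averages, best
```

Applied to the L50 runs and their Ci* values, `best` would read off a design such as A2B3C1D1E1F1 directly.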

Using the proposed TOPSIS method, the analyses showed that the proposed problem is not sensitive to the two normalisation methods; therefore, we arbitrarily chose the vector normalisation method for further analyses. To illustrate the impact of the attribute weights, the value of w1 was varied from 0.2 to 0.6 with a step size of 0.2, keeping w1 + w2 + w3 = 1. For each set of attribute weights, we repeated the proposed methodology and solved for the associated optimal parameter design. The results are summarised in Table 9.
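The weight sensitivity analysis can be sketched as a loop over weight sets. Splitting the remaining weight evenly between the other two criteria is an assumption made for illustration here, not necessarily the split used in Table 9:

```python
def weight_sensitivity(score_fn, w1_grid=(0.2, 0.4, 0.6)):
    """For each w1 on the grid, split the remaining weight evenly
    between w2 and w3 (so w1 + w2 + w3 = 1) and record the index of
    the best-scoring scenario. `score_fn(weights)` returns one score
    per scenario, e.g. a wrapper around a TOPSIS routine."""
    best = {}
    for w1 in w1_grid:
        rest = (1.0 - w1) / 2.0
        scores = score_fn((w1, rest, rest))
        best[w1] = max(range(len(scores)), key=scores.__getitem__)
    return best
```

If the winning index changes between neighbouring grid points, the chosen weights sit near a decision boundary, which is the kind of division Table 9 reports.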

Table 9 shows that there were only three different results for the six scenarios, with divisions occurring at w3 = 0.4 and w3 = 0.6. Therefore, the chosen attribute

Table 6 L50 experimental results


weights (0.2, 0.2, 0.6) were optimal. In terms of throughput performance, there is a slight increase from 187.1 to 191.9 as the throughput weight (w3) increases from 0.2 to 0.6. We further increased w3 to observe the throughput performance; the result is shown in Table 10.

Table 10 shows that the system performance and the factor combination are not influenced by w3.

4.5 VSM: ideal future-state map

Acting upon the optimal system factors from the designed experiment in section 4.4, the future-state map was proposed, as shown in Figure 5.

The results of the future-state map show that the cycle time from the pacemaker location to the final workstation totals 2.08 days, i.e. by controlling the CONWIP

Table 7 TOPSIS values using vector normalisation


upper limit to 400 lots. In addition, the supermarket in front of the pacemaker workstation contains 40 lots of each product type, a total of about 160 lots of inventory, to maintain a continuous flow at the pacemaker. The whole system receives one schedule, which is dominated by the pacemaker's pull mechanism. A comparison of the results between the current-state map and the proposed future-state map, using the three performance measures, is summarised in Table 11. Table 11 shows that, on the three performance measures, the future-state map can significantly improve

Table 8 Average TOPSIS value by factor level using vector normalisation
Note: The optimal design is A2B3C1D1E1F1.

Table 9 Sensitivity analysis for different attribute weights

Figure 4 Plots for factor effects using vector normalization



Figure 5


both the WIP and the cycle time performance whilst maintaining the same throughput level.

4.6 Scenario analysis

In this study, the optimal factor-level combination was used to test further scenarios under different customer demand. The daily demand quantity was 192 lots per day, and this study assumed 8 daily orders, so the mean of each order was assumed to be 24. This research assumed three levels of customer demand for each order quantity: high, 24 × 100%; middle, 24 × 90%; and low, 24 × 80%. These scenarios include two random factors: order quantities are normally distributed with a variance of 0%, 15% or 30% of the mean, and order arrival times are either constant or exponentially distributed with a mean of 3 h. The scenario designs for high, middle and low demand are illustrated in Tables 12, 13 and 14, respectively. After the L50 experiments, solved by simulation-optimisation with MCDM, the sensitivity analyses based on Tables 12, 13 and 14 are shown in Tables 15, 16 and 17, respectively. The sensitivity analyses aim at keeping the throughput sufficiently high while pursuing a

Table 11 Comparing results between current-state map and future-state map

Pacemaker control bottleneck machine setup time 4 times/day.

Table 12 The scenarios of high demand

Table 13 The scenarios of middle demand

Table 14 The scenarios of low demand

Table 15 The sensitivity analysis in high demand scenarios


lower cycle time and WIP. For the high-demand scenarios, the results shown in Table 15 suggest that w1 be close to 0.6; the final parameter design is A2B3C1D1E1F1. In the middle- and low-demand scenarios, the throughput can fulfil customers' requirements; in those cases, however, w1 does not significantly affect the combination of factors. The optimal solution is A1B1C2D1E1F1 for middle demand; for low demand, the optimal factors are A1B1C1D1E1F1.

5 Conclusions

This article proposed a lean pull-production strategy in which a pure push-control system evolves into a lean pull-control system. By setting the bottleneck stage as a pacemaker, material transfer from the pacemaker downstream to finished goods is a continuous flow. Concurrently, it is proposed that two CONWIP loops, combined with one supermarket, be used to turn the whole system into a continuous flow. Thus, finding the optimal WIP parameters of the CONWIP loops and the supermarket is one of our research objectives.

In this study, we defined a lean implementation procedure leading first to a push-control system and then to a lean pull-control system. The proposed analysis procedure can be simply extended from the traditional assembly industry to problems with complex production variability and demand uncertainty, e.g. the TFT-LCD industry. This original concept constitutes a significant contribution to this area of research: it dramatically decreases the difficulty of implementing a pull-control strategy. Moreover, it allows continuous system flow by using CONWIP; in turn, this allows a more sophisticated production system that can accommodate high variability. Finally, the merits of a simulation model were used to explore alternative future states generated by different responses to these scenarios.

The proposed lean pull strategy solved the TFT-LCD case, which has internal manufacturing-system variability and external demand uncertainty. The future-state VSM, solved by simulation and MCDM, showed significant improvement. Accordingly, the present study achieved its objective. The contribution of this study lies in the exploration of a lean study for a practical case that is complex and not addressed in the existing literature. Furthermore, it proposes an effective methodology for solving the problem. The empirical results show promise for practical application and can provide important managerial insights towards the implementation of lean production in a complex environment.

However, the proposed methodology required a significant amount of modelling time and experience to build a simulation model capable of addressing the case-study problem. This, in turn, becomes a barrier when adopting the proposed procedure to solve a practical problem. Further research might seek alternative modelling approaches, and hence reduce this particular modelling barrier.

Moreover, future research directions can extend the model to include detailed factors encompassing design

Table 17 The sensitivity analysis in low demand scenarios


problems, possibly encompassing supplier considerations. Detailed reflection on post-implementation could reveal insightful lessons for real-world improvements. Future research may also investigate smaller moving batch sizes, which are currently fixed in practice.

6 Glossary

The following definitions are adopted from Lean Enterprise Institute et al. (2003) and Hopp and Spearman (2008).

Constant work-in-process (CONWIP): For a given

production line, establish a limit on the WIP in the line

and simply do not allow release into the line whenever

the WIP is at or above the limit
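The CONWIP release rule defined above can be expressed as a minimal sketch; the class and attribute names are illustrative:

```python
class ConwipLine:
    """Minimal CONWIP admission control: a job is released into the
    line only while the current WIP is below the cap; otherwise it
    waits in a backlog outside the line."""

    def __init__(self, wip_cap):
        self.wip_cap = wip_cap
        self.wip = 0
        self.backlog = []

    def arrive(self, job):
        """A job arrives at the line and is released if WIP allows."""
        self.backlog.append(job)
        self._release()

    def complete(self):
        """A job leaves the line, freeing one WIP slot and possibly
        pulling the next waiting job in."""
        self.wip -= 1
        self._release()

    def _release(self):
        while self.backlog and self.wip < self.wip_cap:
            self.backlog.pop(0)
            self.wip += 1
```

Each completion pulls the next waiting job in, which is what couples release to output and keeps WIP at or below the cap.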

Continuous flow: Producing and moving one item at a time (or a small and consistent batch of items) through a series of processing steps as continuously as possible, with each step making just what is requested by the next step.

Just-in-time (JIT): A system of production that makes and delivers just what is needed, just when it is needed, and just in the amount needed.

Kanban: A kanban is a signaling device that gives

authorisation and instructions for the production or

withdrawal (conveyance) of items in a pull system

Level production: Levelling the type and quantity

of production over a fixed period of time

Non-value added (NVA): Any activity that adds

cost but no value to the product or service as seen

through the eyes of the customer

Pacemaker process: Any process along a value

stream that sets the pace for the entire stream

Supermarket: The location where a predetermined

standard inventory is kept to supply downstream

Value-stream-mapping (VSM): A simple diagram

of every step involved in the material and information

flows needed to bring a product from order to delivery

Acknowledgments

The authors thank the anonymous company for providing the case study. The reviewers provided helpful comments that greatly improved the manuscript. This work was supported, in part, by the National Science Council of Taiwan, Republic of China, under grant NSC-98-2221-E-006-100-MY3.

References

Abdulmalek, F.A. and Rajgopal, J., 2007. Analyzing the benefits of lean manufacturing and value stream mapping via simulation: a process sector case study. International Journal of Production Economics, 107 (1), 223–236.

Agyapong-Kodua, K., Ajaefobi, J.O., and Weston, R.H., 2009. Modelling dynamic value streams in support of process design and evaluation. International Journal of Computer Integrated Manufacturing, 22 (5), 411–427.

Arena User's Guide, version 12.0, 2009. Rockwell Automation. Warrendale, PA: Allen-Bradley.

Braglia, M., Carmignani, G., and Zammori, F., 2006. A new value stream mapping approach for complex production systems. International Journal of Production Research, 44 (18–19), 3929–3952.

Browning, T.R. and Heath, R.D., 2009. Reconceptualizing the effects of lean on production costs with evidence from the F-22 program. Journal of Operations Management, 27 (1), 23–44.

Detty, R.B. and Yingling, J.C., 2000. Quantifying benefits of conversion to lean manufacturing with discrete event simulation: a case study. International Journal of Production Research, 38 (2), 429–445.

Gaury, E.G.A., Pierreval, H., and Kleijnen, J.P.C., 2000. An evolutionary approach to select a pull system among kanban, CONWIP and hybrid. Journal of Intelligent Manufacturing, 11 (2), 157–167.

Hopp, W.J. and Spearman, M.L., 2008. Factory physics. New York: McGraw-Hill.

Huang, C.-C. and Kusiak, A., 1996. Overview of Kanban systems. International Journal of Computer Integrated Manufacturing, 9 (3), 169–189.

Hwang, C.L. and Yoon, K.P., 1981. Multiple attribute decision making: methods and applications. New York: Springer-Verlag.

Jodlbauer, H. and Huber, A., 2008. Service-level performance of MRP, kanban, CONWIP and DBR due to parameter stability and environmental robustness. International Journal of Production Research, 46 (8), 2179–2195.

Kelton, W.D., Sadowski, R.P., and Sturrock, D.T., 2009. Simulation with Arena. New York: McGraw-Hill Higher Education.

Khalil, R.A., Stockton, D.J., and Fresco, J.A., 2008. Predicting the effects of common levels of variability on flow processing systems. International Journal of Computer Integrated Manufacturing, 21 (3), 325–336.

Krafcik, J.F., 1988. Triumph of the lean production system. Sloan Management Review, 30 (1), 41–52.

Kuo, Y., et al., 2008. Using simulation and multi-criteria methods to provide robust solutions to dispatching problems in a flow shop with multiple processors. Mathematics and Computers in Simulation, 78 (1), 40–56.

Lean Enterprise Institute, Marchwinski, C., and Shook, J., 2003. Lean lexicon: a graphical glossary for lean thinkers. Brookline, MA: Lean Enterprise Institute.

Lian, Y.H. and Van Landeghem, H., 2007. Analysing the effects of lean manufacturing using a value stream mapping-based simulation generator. International Journal of Production Research, 45 (13), 3037–3058.

Lummus, R.R., Vokurka, R.J., and Rodeghier, B., 2006. Improving quality through value stream mapping: a case study of a physician's clinic. Total Quality Management, 17 (8), 1063–1075.

Mason-Jones, R., Naylor, B., and Towill, D.R., 2000. Lean, agile, or leagile? Matching your supply chain to the marketplace. International Journal of Production Research, 38 (17), 4061–4070.

McDonald, T., Van Aken, E.M., and Rentes, A.F., 2002. Utilising simulation to enhance value stream mapping: a manufacturing case application. International Journal of Logistics Research and Applications, 5 (2), 213–232.

