
Knowledge value chain: an effective tool to measure knowledge value

Yang Xu* and Alain Bernard

IRCCyN, Ecole Centrale de Nantes, Nantes, France

(Received 21 September 2009; final version received 19 May 2010)

Knowledge value is a significant issue in knowledge management, but its related problems are still challenging. This paper aims at discussing how knowledge value changes in the knowledge evolution process and develops a knowledge value chain (KVC) to measure knowledge value. By applying the notions of knowledge state and knowledge maturity, the knowledge finite state machine (KFSM) and knowledge maturity model (KMM) are introduced to characterise the KVC. Based on these concepts, knowledge value is measured by calculating the difference between two maturity states rather than by direct calculation. This point of view of knowledge value, the construction of the KVC and the association of knowledge value and knowledge maturity are insightful for both researchers and practitioners.

Keywords: knowledge management; knowledge value; value chain

1 Introduction

Nowadays, more and more enterprises and entrepreneurs realise that knowledge plays an important role in business success and that knowledge management is becoming a core activity. The capacity of knowledge management becomes a crucial issue for companies, and essential for enterprise competitiveness (Bernard and Tichkiewitch 2008).

However, although 'knowledge is power' was spelled out more than 400 years ago (Bacon 1597), it is still easier said than done, and people can hardly control and measure knowledge as they can electrical or mechanical power. Therefore, there is a growing need to represent this 'power' in an explicit way and specify the process during which this 'power' works.

Before companies became aware of the importance of knowledge management, knowledge activities were usually ill-defined, and as a result, knowledge innovation, application and abandonment mostly happened without rigid control. One of the main purposes of knowledge management is to standardise and formalise knowledge changing and transmission processes, so as to follow the rules concerning knowledge and to control knowledge in order to improve production activities (Leonard-Barton 1995). Tacit knowledge, which exists in the form of mental models, beliefs, experience or other forms of know-how of individuals, is not easy to convey in formalised patterns and is usually represented and stored using storytelling (Guerra-Zubiaga and Young 2008).

The backbone of knowledge management is to ensure the availability of knowledge to all people or cases that may require it. 'Availability' means not only an extensive, living and sharable knowledge base accessible to all users within an organisation, but also the probability and ease of satisfying requirements, and the degree of satisfaction. In recent years, many researchers have made important contributions to measuring such 'satisfaction'. Ahn and Chang (2004) have introduced a KP3 methodology to assess the contribution of knowledge to business performance, and they have established logical links between knowledge and business performance through product and process concepts. Chen et al. (2009) have integrated analytical network processes and balanced scorecards to measure knowledge management performance, so as to compare an organisation's knowledge management performance with its rivals and to improve its knowledge management activities. Bernard and Xu (2009) have developed an integrated knowledge reference system to describe the knowledge evolution process in product development and show the mutual value-adding process between knowledge and product. Wen (2009) has constructed a model to measure knowledge management effectiveness by using focus groups, analytic hierarchy process and questionnaire analyses. Liu et al. (2005) have made some empirical surveys to compare the effectiveness of different knowledge management systems.

*Corresponding author. Email: Yang.Xu@irccyn.ec-nantes.fr

Vol. 23, No. 11, November 2010, 957–967

ISSN 0951-192X print/ISSN 1362-3052 online

© 2010 Taylor & Francis

DOI: 10.1080/0951192X.2010.500677


As knowledge is inherently difficult to measure, former researchers mostly assess the outcomes associated with knowledge, such as the performance of a knowledge management system or how knowledge could contribute to business performance, instead of measuring knowledge directly. There is, therefore, a lack of direct focus concerning the problem of knowledge value, which is a key point and a bottleneck in knowledge management. To achieve this goal, this paper will discuss issues of knowledge value and how knowledge could be evaluated and acted upon in production activities.

2 Knowledge value

When talking about 'knowledge is power', intuitively, people are aware of the fact that knowledge can make things better. Furthermore, when saying 'better', it can be 'a little better' or 'much better', so people are eager to know 'to what extent are things better'. As a result, a measurement should be introduced. Unfortunately, people regard measurement as one of the most difficult parts of the knowledge management field (Ruggles 1998), and Liebowitz and Wright (1999) even stated that it was not clear whether knowledge could be measured.

In order to address this issue, the primary problem is, above all, 'to measure what'. This paper proposes the term 'knowledge value' as what is to be measured when considering 'how powerful knowledge is'. Consequently, we come to the question: what exactly is 'knowledge value'?

'Value' is a flexible term that is used in a variety of domains with different meanings. For example, in economics, it means the market worth or estimated worth of commodities, services, assets or work; in mathematics, the output of a function is called the value; in psychology, value explains why people prefer or choose some things over others, i.e. the explanation of an individual's preferences in life goals, principles and behavioural priorities (Renner 2003). From these different meanings of 'value', we may conclude that 'value' describes how people positively or negatively evaluate things and concepts, and the reasons used in making their decisions.

Based on the fundamental meanings of 'value', this paper uses the term 'knowledge value' to characterise 'the ability of knowledge to make things better'. From the point of view of the knowledge lifecycle (Birkinshaw and Sheehan 2002), at the beginning, knowledge cannot always meet the requirement, which is determined by a given context. Knowledge has to go through an evolution process to augment its ability, i.e. its value, so as to arrive at a state which can meet the requirement and solve the problem. The notion of 'maturity state' is thus introduced to represent such a state, and it integrates the knowledge itself within the context.

Before starting the survey on knowledge value and knowledge maturity, some insightful arguments on information value and information maturity should be presented, as knowledge is usually linked to information, and information is a concept that is similar to knowledge in some points of view.

Sillince (1995) argued that information value depends on the probability of information transfer from one person to another within an information system and is affected by whether such a transfer is costless or not. By modelling three types of information (related information, alternative information and unrelated information), some useful formulations are deduced, which make it possible to discover which organisational or market forms are most able to deliver high information value through empirical testing. Zhao et al. (2008) clearly distinguished information value from information quality and defined it as '(Benefits of having information) / (Resources spent on storing and retrieving)', which is equal to '(Quality × Relevance) × Saving'. They then developed an assessment system integrating information characteristics, Bayesian network theory and conditional probability statistical data to evaluate information value. For the concept of information maturity, the Meta Group proposed a five-level model of information maturity, which enables organisations to assess their information management practices (MIKE 2010), and Blanco et al. (2007) presented a tool called PIQUANT to illustrate an information maturity management model which can manage information types within different workspaces during the design process.
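As a rough numerical illustration of Zhao et al.'s definition (a sketch only; the function and variable names below are our own, not from the cited paper), information value grows with quality, relevance and retrieval saving:

    # Sketch of Zhao et al.'s (2008) information value definition:
    # value = (benefits of having information) / (resources spent on
    # storing and retrieving) = (quality x relevance) x saving.
    # All names are illustrative; inputs are normalised to [0, 1].
    def information_value(quality: float, relevance: float, saving: float) -> float:
        return quality * relevance * saving

    # A high-quality, highly relevant document that is cheap to retrieve:
    print(information_value(quality=0.9, relevance=0.8, saving=0.7))  # 0.504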

Knowledge is not simply information, so the notions of knowledge value and knowledge maturity differ from the concepts introduced above; they will be introduced in detail in the following sections with explicit and formal definitions.

2.1 Features of knowledge value

Knowledge is different from traditional resources, having many features which make it difficult to judge its value (Stewart 1997). For example, knowledge can be used without being consumed, and some knowledge can only be sold once. For a traditional power source such as petrol, sales personnel can calculate how much is consumed by clients and take this parameter as an index in evaluating its value. But for knowledge 'power' that can only be sold once, such as an idea, people can hardly evaluate its value by estimating how much it is expected to be used. For example, the value of software depends on the number of times it is used once bought, and different clients may use it with different frequencies. Sometimes such frequencies can be obtained, such as the 'impact factor' of a scientific paper or journal, but when such a 'frequency' or 'number of times' is not available, it is more practical to consider 'knowledge value' as the value knowledge provides each time it is used instead of the 'whole' value that the knowledge could provide in its lifecycle.

Another feature is that the values of different kinds of knowledge are, for the most part, not comparable. For example, how can we compare the value of 'specialised' knowledge (e.g. the formula of diet pills) and 'common' knowledge (e.g. doing exercises to keep fit)? Both of them are useful for a similar purpose and may have the same effect. Although people would have to pay more to buy diet pills than to jog along a river, we cannot say 'common' knowledge is less valuable than 'specialised' knowledge. But in reality, why should we always pay more for 'specialised' knowledge? It is not because 'specialised' knowledge is more valuable than 'common' knowledge but because it costs more, and people may think something that costs more is more valuable, which might not be true. In practice, admittedly, cost and time both have a great impact on knowledge evaluation, as knowledge should always be acquired within the financial budget and a tolerable time delay, and conflicts between planned cost/time and actual cost/time may result in some risks, so a comprehensive investigation should be based on value/cost/risk. This paper mainly focuses on the issues of value, and further research will take cost and risk into consideration.

The third feature is that knowledge value can be estimated only when knowledge is used and can hardly be judged in advance. For example, when we go to a lecture or take a course, we are not able to answer the questions 'Is it valuable to you? How much value could it bring to you?' before we have heard the content. Sometimes, statistical studies based on experience or other methods might be used to estimate the value of an object, but this is not the case for knowledge, because judgments from other people (or other cases) may vary greatly from each other. As a result, 'knowledge value' cannot be 'predicted', and it is defined on condition that objectives are already known. In other words, knowledge value should be simulated within a context that is known.

From these features, which are still far from covering all the features of knowledge value, we may reach the following conclusions when defining knowledge value:

Knowledge value is the value of its one-time use.

Knowledge value does not have a direct link with its cost.

Moreover, knowledge can exist in both explicit and tacit forms, or, as Chilton and Bloodgood (2008) pointed out, knowledge has 'its degree of tacitness'. As explicit knowledge is codified and formally stored in specific media, people may find some numerical parameters to characterise a given aspect; for example, people can use 'bit' to measure the quantity of a kind of explicit knowledge such as electronic documents or even audio-visual files. The Impact Factor (IF) founded by the Institute for Scientific Information (ISI) is also a well-known parameter to evaluate the importance of a specific type of explicit knowledge – published scientific papers. On the other hand, tacit knowledge can hardly be represented objectively. People are often not aware of the fact that they are using the tacit knowledge they possess, so it is more difficult to recognise how valuable it is. Tacit knowledge is quite personal, as it can only be gained through personal experience and transferred through personal contacts. People may have their own judgement of it: whether they possess it, whether they are using it, whether it is important in helping them to carry out certain actions or tasks and, more confusingly, whether it is useful and valuable for others.

As a result, objective or absolute measurements are actually very limited in 'measuring' knowledge value. We may even say that knowledge does not have a philosophically absolute value that is independent of the context, unlike a stone of 5 kg, which is always '5 kg' regardless of the time, the site or the actor. Furthermore, it is also difficult to capture the 'outcome' of knowledge activities directly and objectively, as people could easily fail to extract the real causal relationship between a knowledge activity and 'its' outcomes, so we should sometimes rely on parameters that are subjective, depending on individuals, organisations and cases. Thus, we may conclude that knowledge value should always be studied and measured in context.

2.2 Knowledge value in context

The proposition that knowledge has value is ancient, and arguments about knowledge value emerged thousands of years ago. Pritchard (2007) has analysed several problems concerning knowledge value, including the primary value problem for knowledge (the Meno problem concerning why knowledge is more valuable than mere true belief), the secondary value problem (concerning the issues of why knowledge is more valuable than any proper subset of its parts) and the tertiary value problem (why knowledge is of more value to us than whatever falls short of knowledge). Beyond the philosophical analysis of knowledge value, nowadays, knowledge value is also recognised in a business context: for example, a pair of shoes is worth more than leather plus rubber, and it is knowledge that augments the value of products in this manufacturing process. This example indicates a clear fact that knowledge value is revealed in the process of product development; thus, this paper limits the study of knowledge value to within such a context.

Definition 1. The knowledge context is characterised by attributes Oi.

Intuitively, a knowledge context can be regarded as a super-cube which characterises the environment of knowledge activities, and it may have different axes in different cases.

In order to explain how this knowledge context space can be concretely applied, this paper has chosen three main aspects: participants (PA), knowledge status (KS) and product lifecycle (PL).

Thus, we have:

C = PA × KS × PL;  ci = {pai, ksi, pli}

where ci ∈ C is a specific context, and pai ∈ PA, ksi ∈ KS, pli ∈ PL.

Each set has several values, i.e. elements, which are illustrated as follows:

(1) PA = {director, manager, engineer, technician, operator}

PA refers to the human factor. Knowledge is a thing which is always linked to people, and different people may regard the same piece of knowledge differently. For example, given a piece of knowledge that describes the selling strategy of a competitor, the manager of an enterprise may regard it as crucial, but the operators working in the manufacturing unit may think it useless. PA has five elements, i.e. values, which are derived from the common hierarchical structure of a company. All providers, users and workers of knowledge are treated as 'human factors', and they correspond to one of these five elements. When a human factor is assigned one value, it does not merely refer to the name of the job, but to the different levels of points of view and functions, from a strategic outline down to concrete operations.

(2) KS = {initial, ordered, organised, usable, intelligent}

Initial. The knowledge is stored in an unsystematic way.

Ordered. The knowledge has already been stored in text files and in formal forms, but at a relatively low integration degree, and data redundancy exists. For example, the numerical data of collected questionnaires belong to this status.

Organised. The knowledge is structured logically, and can be maintained and managed by effective mechanisms. Organised knowledge abstracts the core information of knowledge. Results after synthesis, classification and calculation belong to this status, e.g. the average income per family in area X is $60,000 per year, the average number of cars owned per family in area Y is 1.2, etc.

Usable. The description and organisation of knowledge is user-oriented; in other words, usable knowledge is rather descriptive and is able to describe phenomena which have been analysed and concluded according to the desires of users. E.g. people of area X tend to use fuel-efficient cars rather than pursuing high performance in speed, young people of area Y are more interested in car design than car size, etc.

Intelligent. Knowledge and production activities are integrated comprehensively, and knowledge acts as a motivating power with a certain degree of intelligence. General conclusions and suggestions for decision making belong to this status.

Intuitively, when knowledge is abstract and descriptive it is close to the 'Intelligent' side, and when it is more likely to be in code or digital forms it approaches the 'Initial' side. KS may seem to be similar to the data-information-knowledge-wisdom (DIKW) hierarchy (Rowley 2007), but the difference is that, when presenting DIKW, people mainly aim to explain how to understand those literally abstract concepts (they do not exist physically) in a more comprehensive way, using illustrations, examples or metaphors. People may be interested in exploring how the chain is constructed and in clarifying the fuzzy transitions between different concepts. For example, Hey (2004) examined the transitions between data, information and knowledge which link them as a DIKW chain. Similarly, 'initial' knowledge seems like 'data' and 'intelligent' knowledge has a sense of 'wisdom'. However, this paper emphasises how knowledge is organised, rather than the relationship between different knowledge statuses.

There is another important difference between the description of knowledge status and the DIKW chain. The DIKW chain implies an evolution process; in other words, wisdom is better or more advanced than data. In our proposition of knowledge status, the five elements are 'equal'; that is to say, for example, the 'intelligent' knowledge status is not always superior to the 'ordered' status.

(3) PL = {information gathering, design, development and testing, manufacturing, sales, service}

PL links the evolution of knowledge with product development. In product development, knowledge is inseparable from a product, so the product lifecycle should be introduced. For example, a product design does not have the same purpose in the realisation phase as in the selling phase. Currently, the concept of product lifecycle no longer emphasises just the financial matters that an enterprise applies to business planning and management. It has been more broadly used as an engineering term to describe a comprehensive approach to managing enterprise performance (Ma and Fuh 2008). It is usually integrated with knowledge management (KM) methods, especially in dynamic and collaborative environments (Thimm et al. 2006), and different stages of a product lifecycle have different knowledge requirements (Xu and Bernard 2009).

The axis of PL enables the integration of product lifecycle management (PLM) within the context. PLM is the process of managing the entire lifecycle of a product, and it integrates people, products and processes to form an all-encompassing system that can provide companies with an overview of product development. Typically, PLM aims at improving product development processes and involves activities such as information gathering, conception, design, manufacturing, sales and services. Figure 1 shows the wheel of PLM.

The phase of Information Gathering mainly includes investigation, data collection and analysis, etc.

The phase of Design mainly includes requirement specification, product definition, general conception, detail design, embodiment, etc.

The phase of Development and Testing mainly includes prototype simulation, acquirement and adjustment of technical parameters, product validation, etc.

The phase of Manufacturing mainly includes product manufacturing, assembly, packing, etc.

The phase of Sales mainly includes advertising, selling, product/service delivery, etc.

The phase of Service mainly includes maintenance, after-sales support, product retirement and recycling, etc.

PA, KS and PL are the three axes chosen in this paper to characterise the knowledge context in the following studies. These three are chosen to serve as an example of how the knowledge context space can be implemented, and other attributes can be chosen to characterise specific contexts. Not all other attributes can be treated in exactly the same way as PA, KS and PL are, because different attributes have different possible values, but the idea of constructing a knowledge context space is the same: the knowledge context is characterised by Cartesian coordinates consisting of several attributes, and the elements of each attribute correspond to given points on the axis.
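To make the construction concrete, the following minimal Python sketch (our own illustration, not code from the paper) builds the context space C = PA × KS × PL and picks one specific context ci:

    from itertools import product

    # The three axes chosen in this paper, as plain value sets.
    PA = ["director", "manager", "engineer", "technician", "operator"]
    KS = ["initial", "ordered", "organised", "usable", "intelligent"]
    PL = ["information gathering", "design", "development and testing",
          "manufacturing", "sales", "service"]

    # The context space C = PA x KS x PL; every triple is one context c_i.
    C = set(product(PA, KS, PL))
    print(len(C))  # 5 * 5 * 6 = 150 possible contexts

    # One specific context: an engineer handling organised knowledge in design.
    c_i = ("engineer", "organised", "design")
    assert c_i in C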

Figure 1. The wheel of PLM.

2.3 Knowledge state

The notion of state is usually applied to simplify problem characterisation by dividing a continuous developing process into discrete stages. When former researchers describe knowledge activities by knowledge states, they usually build the knowledge value chain model with a linear series of knowledge stages and steps (Lee and Yang 2000, Wong 2004). However, those models are limited by their linear structure, and when different stages have ambiguous boundaries, they bring uncertainty as well. As a result, this paper proposes a knowledge finite state machine (KFSM) to represent the knowledge evolution process.

Definition 2. A knowledge finite state machine (KFSM) is a hextuple ⟨Q, Σ, K, δ, s0, F⟩, where:

Q is a finite and non-empty set of knowledge states si;

Σ is a finite and non-empty set of manipulations required to change the knowledge states;

K is a finite and non-empty set of knowledge required to change the knowledge states, including two subsets Ka (knowledge available, namely the knowledge existing in the knowledge base) and Ki (knowledge imported, namely the knowledge that needs to be acquired from the outside); the knowledge fragments are noted by k, and k ∈ K;

δ is the state transition function δ: Q × Σ × K → Q, and when a transition from si to si+1 happens with the 'right' manipulation and knowledge, it is called an effective state transition;

s0 is the initial state, which is an element of Q;

F is the set of final states, which is a subset of Q, and there is at least one state sn ∈ F.

The KFSM provides us with a method to describe the knowledge evolution process in a more flexible and general way. When a knowledge evolution process starts with s0 and ends up with sn, the aim of KM activities is to eliminate the difference between s0 and sn, and this target can be accomplished by bridging the gap between si and si+1 step by step.
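A minimal sketch of Definition 2 in Python (the concrete states, manipulations and knowledge fragments below are invented for illustration; only the hextuple structure follows the definition):

    # KFSM hextuple <Q, Sigma, K, delta, s0, F> from Definition 2.
    Q = {"s0", "s1", "s2"}                      # knowledge states
    Sigma = {"synthesise", "plan"}              # manipulations
    K = {"k_stats", "k_capacity"}               # knowledge fragments (Ka or Ki)
    delta = {                                   # (state, manipulation, k) -> state
        ("s0", "synthesise", "k_stats"): "s1",
        ("s1", "plan", "k_capacity"): "s2",
    }
    s0, F = "s0", {"s2"}                        # initial state and final states

    def step(s, m, k):
        """One effective state transition, if (s, m, k) is defined in delta."""
        return delta[(s, m, k)]

    s = step(s0, "synthesise", "k_stats")       # s0 -> s1
    assert step(s, "plan", "k_capacity") in F   # s1 -> s2, a final state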

2.4 Knowledge maturity

When characterising an evolution process, the notion of maturity is usually used to describe the varying states of a thing during its development. Maturity has different meanings in different domains, such as:

Biology. The age/stage when an organism can reproduce.

Geology. A measurement of a rock's state in terms of hydrocarbon generation.

Psychology. A person responds to their circumstances or environment in an appropriate manner, being aware of the correct time and place to behave and knowing when to act in serious or non-serious ways.

Software engineering. To what extent it is planned how to do things, mainly described by the capability maturity model (CMM) (Paulk et al. 1995).

Inspired by these interesting applications of the term 'maturity', we will introduce the maturity of knowledge based on the notion of knowledge state. As knowledge is context sensitive, a given knowledge state may be less mature in one situation than in another. For example, a mathematical theory is mature for a professor (as the knowledge is ready for reuse in solving some problems), while it is not mature enough for a pupil (as the knowledge must be illustrated by some simple means so that the pupil may understand and apply it). Consequently, the knowledge maturity described in this paper associates the knowledge with the context, and is defined as follows:

Definition 3. Knowledge maturity describes the state of knowledge within a specific context, noted as mi = ⟨si, ci⟩, where si ∈ Q is a knowledge state and ci ∈ C is the context in which it is considered.

3.1 Knowledge maturity model

In fact, the subject of a knowledge maturity model (KMM) has already incited interest among researchers, and they have conducted insightful surveys on KMM. For example, Markow (2004) proposed a KMM consisting of four levels: the Process cycle, Roles, KIM (Knowledge Insight Model) and Inner mechanism; Robinson et al. (2006) constructed a knowledge management maturity roadmap of five stages: Start-up, Take-off, Expansion, Progressive and Sustainability.

The main idea of these existing KMMs mainly comes from CMM, which is mostly applied in the field of software engineering. CMM was originally intended as an objective evaluation and served as a tool to measure the performance of various software engineering contractors. It measures an organisation's current state by providing a set of goals and a checklist indicating what the organisation should accomplish to reach a higher level, and it is now one of the most recognised models in industry. Through a number of applications and experiences, it has been shown to be well suited for organisations when characterising their key processes.

The basic idea of CMM is a five-level ladder consisting of the initial, repeatable, defined, managed and optimised levels, and the existing KMMs are also based on this idea of a 'level-ladder'. Here are several limitations of the 'level-ladder' structure when describing knowledge activities in product development:

The direction of maturity development is always 'up'. According to CMM, we can say that when a company is at level 3, it is more mature than when it is at level 2, and the company's aim is always to progress from a low level to a higher level, i.e. the direction is always 'up'. However, does knowledge always go 'up'? For example, the mathematical formula 'x + y = y + x' is the induction of '1 + 2 = 2 + 1, 3 + 5 = 5 + 3, etc.' and it can describe a general mathematical law; thus, it is supposed to be at a 'higher' level and more mature than '1 + 2 = 2 + 1, 3 + 5 = 5 + 3, etc.'. But when this knowledge is expected to be taught to pupils in their first year of primary school, 'x + y = y + x' should be transferred to '1 + 2 = 2 + 1, 3 + 5 = 5 + 3, etc.'. In this case, the direction of knowledge evolution is 'down' and the knowledge at a lower level is more mature.

All tasks of one level should be accomplished in order to go to the next. In CMM, companies should climb the 'ladder' level by level; however, as knowledge is an active thing, can it not transfer its maturity status by 'leaping' or 'making a detour'? For example, some people may follow the sequence 'have an idea → write it down → realise it', but others can make a direct leap from 'idea' to 'action'.

The higher level overlays the information of the lower level. But for knowledge, is the knowledge in a 'less mature' status no longer useful? In the traditional understanding of maturity levels, the information of a less mature status is overlaid by a more mature status. For example, in the construction process of a manufacturing system, from the traditional viewpoint, the result (a system that is working) is more mature than the design drafts. However, the knowledge contained in the design drafts could also be valuable, such as knowledge about 'why Part X is designed like that', but the manufacturing system itself does not communicate this knowledge: in other words, the knowledge is overlaid by the result, which is more mature. Given the same manufacturing system, different participants need different knowledge. Engineers whose duty is to maintain the system need the knowledge about 'why Part X is designed like that', but for technicians who focus on operating the system, what is useful is knowledge about 'how to use Part X'.

In order to overcome the limitations above, the KMM introduced in this paper implies the idea of multi-dimensionality rather than a linear structure. As defined in Section 2.4, knowledge maturity is not only determined by the knowledge state itself, but also by its context. Together with the KFSM introduced in Section 2.3, the KMM will serve as a base for the knowledge value chain.

3.2 Knowledge value chain

The concept of the value chain was described and popularised by Porter (1996) as a value-adding process in which an organisation might engage. Based on this understanding, more researchers have continued to improve the knowledge value chain (KVC) with their own emphases. Holsapple and Singh (2001) introduced a knowledge chain model comprising five primary activities that an organisation's knowledge processors perform in manipulating knowledge resources, with four secondary activities that support and guide their performance. King and Ko (2001) proposed an information/knowledge value chain which is based on three important levels at which value-enhancing activities are conducted. Eustace (2003) developed it as a model that integrates different perspectives from various interest groups. Carlucci et al. (2004) modelled the KVC as a series of stages of KM. Wang and Ahmed (2005) developed a KVC which incorporates eight types of KM processes and five kinds of KM enablers.

These outstanding researchers mainly organised the knowledge value chain by arraying different levels, describing the stages of an organisation's activities from acquiring knowledge to using it. However, existing knowledge value chains are mainly descriptive. This paper differs from former perspectives as it regards the knowledge value chain as a sequential flow of knowledge in which knowledge value increases, and thus aims at proposing a model based on the knowledge value chain to measure knowledge value and survey the mutual impact between knowledge and product. It describes the knowledge evolution process in an explicit form. By applying the KVC to knowledge activities, critical nodes of the evolution process can be revealed and different evolution 'paths' can be compared to achieve an optimised solution.

This paper characterises the KVC with the KFSM and KMM, and Figure 2 shows a KVC 's0 → s1 → s2' which characterises the knowledge evolution process in a 3-dimensional super-cube.

Figure 2. A KVC based on KFSM and KMM.

By instantiating the elements of the three axes with values, an example is set up to illustrate the KVC:

s0: o1 = technician, o2 = ordered, o3 = information gathering

s1: o1 = engineer, o2 = organised, o3 = conception

s2: o1 = manager, o2 = usable, o3 = sales

The KVC 's0 → s1 → s2' represents the following process:

(1) In the information gathering stage, data analysts (technicians) process the data coming from investigations by transformation, filtration, calculation, etc.; the knowledge gathered in the market is then transferred into the expected selling quantity of cars.

(2) Based on the results of the marketing investigation s0, the production schedule s1 is proposed, e.g. in the next season, 10,000 cars with a 1.6L engine and 20,000 cars with a 2.0L engine will be produced.

(3) The selling plan s2 is established by the manager according to the previous results.

In this KVC, the selling plan s2 is the final state, as the aim of the investigation is to help optimise the selling plan. It should be noted that the selling plan is influenced by the production schedule as well, because the actual production capability of an enterprise is limited, so the ideal selling plan cannot always be obtained. Thus, s1 is a critical node of the KVC. It is also possible that a KVC might have different critical nodes which form different 'paths'. Furthermore, a KVC can be split into several sub-KVCs according to practical needs.
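The example chain can be written down directly as a list of maturity states, i.e. (knowledge state, context) pairs; this is our own sketch of the data, following the instantiation above:

    # The KVC "s0 -> s1 -> s2" of the car example, one (label, context) per node.
    kvc = [
        ("market statistics",   ("technician", "ordered",   "information gathering")),
        ("production schedule", ("engineer",   "organised", "conception")),
        ("selling plan",        ("manager",    "usable",    "sales")),
    ]

    # Interior nodes are candidate critical nodes: alternative "paths" from s0
    # to s2 are compared by how they pass through (or bypass) them.
    critical = [label for label, _ in kvc[1:-1]]
    print(critical)  # ['production schedule']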

In a KVC, a knowledge state si is considered to be more mature than sj when its maturity state mi is 'closer' to the final maturity state mn, namely its distance |mi − mn| is smaller. In particular, mn is regarded as completely mature. This distance is related to the knowledge value and will be discussed in the following section.

4 Knowledge evaluation based on knowledge value chain

Based on the KVC introduced above, we have a brief idea about knowledge value: knowledge value increases during the evolution process, and the value is greater when the knowledge is 'closer' to the end (target). The notion of knowledge value is defined as follows:

Definition 4. Knowledge value represents the knowledge evolution degree in a KVC, noted as V(si). Generally, V(s0) = 0, V(sn) = 1.

Unlike length with 'meter', information with 'bit', or price with 'dollar', there is still no specific unit with which to assess knowledge value, because people have not yet found a metric by which knowledge value can be added physically. For this reason, this paper does not propose a unit either, but uses percentages to measure the degree of knowledge evolution. Given an initial state s0 and a final state sn to arrive at, there is a distance |sn − s0|, and knowledge value is the power to 'travel the path' (we may take as a metaphor the petrol that provides power to a car travelling on the road). As there is no physical unit measuring knowledge value, the use of percentages is suitable. A simple hypothesis for measuring knowledge value using percentages is that if knowledge k1 can 'make the travel further' than k2, then k1 has a higher value than k2; e.g. if k1 can achieve 70% of the path |sn − s0| and k2 can achieve 50%, then k1 has the higher value.

As proposed above, the KVC is a framework to characterise knowledge evolution in a given context, and therefore it can serve as a base to describe and measure knowledge value. By associating knowledge evolution with knowledge maturity, knowledge value can be measured by knowledge maturity; in other words, when the knowledge maturity state changes from mi to mi+1, the knowledge value changes from V(si) to V(si+1).

The procedure for calculating the difference between two maturity states, namely |mi − mj|, is as follows:

(1) Suppose n indicators are considered in calculating |mi − mj|. As an example, let us take n = 3, and the three indicators chosen are the financial cost f, the consumed time t and the risk r. To unify the three indicators, the calculating formulae are as follows:

(a) The financial cost f = [(money spent in this step)/(the total cost of the lifecycle)] × 100%.

(b) The consumed time t = [(time spent in this step)/(the total time of the lifecycle)] × 100%.

(c) The risk r = (1 − αβγ) × 100%, where α, β and γ are calculated as follows:

(i) Suppose x1, x2, x3 are the incremental steps of the elements in PA, KS and PL respectively, where the 'incremental step' means the number of step-by-step changes. E.g. the incremental step of 'technician to manager' is 2; the incremental step of 'usable to organised' is 1; the incremental step of 'design to service' is 3.

(ii) Suppose the success rates of one leaping degree of the elements in PA, KS and PL are y1, y2, y3 respectively. The success rates indicate the risks involved in state changes. As we know, when knowledge states are changing, the unexpected may happen. Different reasons may cause yi < 1, such as:

(1) For PA: incomplete efficiency of execution, different understandings, etc.

(2) For KS: loss of data, calculation errors, semantic differences, etc.

(3) For PL: deviation during the product lifecycle evolution, e.g. incomplete realisation of the design.

(iii) α = y1^x1, β = y2^x2, γ = y3^x3.

(2) The difference between the two maturity states is the weighted mean of f, t and r:

|mi − mj| = ωf × f + ωt × t + ωr × r, where ωf + ωt + ωr = 1.

The success rates yi are assigned by experts according to statistical data or their experience. For example, when a knowledge state changes from 'organised' to 'usable', we should map 'people under 35 years old' to 'young people'. In this case, such a mapping may bring some risks caused by different semantic contexts: some people of 40 years old may also be considered 'young people' in some contexts, but not in all. Experienced experts will assign y2 an appropriate number according to whether the mapping process brings big or small risks to the case. The weights are assigned according to different strategies or purposes; for example, in the production of an airplane, ωr will be given a big value such as 0.8, whereas ωf and ωt will be assigned small values.

This paper has chosen f, t and r as the three indicators to calculate |mi − mj|, and they are chosen according to Malmi and Ikäheimo's (2003) analysis of value management (VM). It should be noted that other indicators could be introduced according to specific cases as well. People may choose n indicators and assign weights ω1, ω2, …, ωn respectively, on condition that Σi=1..n ωi = 1. More generally, we obtain:

|mi − mj| = [Σi=1..n (ωi × indicatori)] / [Σi=1..n ωi]

When we take n = 3 and instantiate indicator1 with the financial cost, indicator2 with the consumed time and indicator3 with the risk, we arrive at the procedure introduced above.

Given two maturity states mi and mj, if a knowledge fragment k (or human resource, or knowledge activity, etc.) can complete the state transition mi → mj, then according to Definition 4, V(k) = |mi − mj|. Thus, if the newly introduced knowledge k can make the maturity state evolve from mi to mj, it has a value of V(k) = |mi − mj|. As an example that we have tested, given f = 0.6, t = 0.75, r = 0.3, and the assigned weights ωf = 0.4, ωt = 0.4 and ωr = 0.2, then V(k) = 0.6. This 0.6 means 'the ability of knowledge k to evolve si to sj'. In a global view, 'si to sj' is just one part of 'the initial state s0 to the required state sn', and this part is worth '60%' of the whole. If V(k) = 1, it means that k is sufficient to achieve the 'whole path', and if V(k) = 0, it means that k is completely useless. By qualitative analysis, we find that knowledge has a higher value when it can fill the gap between two maturity states that involve higher financial cost, longer consumed time and higher risk.
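The whole procedure fits in a few lines of Python; this sketch (function and variable names are ours) reproduces the worked numbers above:

    # Evaluation procedure of Section 4: risk from incremental steps and
    # success rates, then |mi - mj| as a weighted mean of the indicators.
    def risk(x, y):
        """r = 1 - alpha*beta*gamma, with alpha = y1**x1, beta = y2**x2, gamma = y3**x3."""
        alpha, beta, gamma = (yi ** xi for xi, yi in zip(x, y))
        return 1 - alpha * beta * gamma

    def maturity_distance(indicators, weights):
        """|mi - mj| as the weighted mean; the weights must sum to 1."""
        assert abs(sum(weights) - 1.0) < 1e-9
        return sum(w * v for w, v in zip(weights, indicators))

    # Illustrative risk: steps x = (2, 1, 3) on PA, KS, PL with success
    # rates y = (0.9, 0.95, 0.9) per leaping degree (numbers are ours).
    print(round(risk([2, 1, 3], [0.9, 0.95, 0.9]), 3))  # 0.439

    # Worked example from the text: f = 0.6, t = 0.75, r = 0.3 with
    # weights 0.4, 0.4, 0.2 gives V(k) = 0.6, i.e. 60% of the path |sn - s0|.
    print(round(maturity_distance([0.6, 0.75, 0.3], [0.4, 0.4, 0.2]), 2))  # 0.6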

For any knowledge involved in production activities, V(k) could serve as a benchmark to measure knowledge value and to judge the financial quotations from different knowledge providers such as consulting companies or outsourcing services. For example, two knowledge providers may offer knowledge k and k′, whose values can be measured by V(k) = |mi − mj| and V(k′) = |m′i − m′j| respectively. With the KVC developed in this paper, the values of these two different knowledge solutions can be compared. Moreover, when there are several 'paths' from s0 to sn with different critical nodes (cf. Figure 3), different solutions can be compared quantitatively; thus, the KVC could play an important role in cost management.

Figure 3. Different solutions in a KVC.

5 Discussions

'Knowledge is power', but how can the 'power provider' be characterised and how can the 'power' be represented? In order to answer these questions, this paper focuses on the problem of knowledge value, which is used to describe 'the power of knowledge'. 'Value' is a flexible term that has abundant meanings and is applied in a variety of domains. To avoid terminology inflation, this term should be given its proper meaning in the specific context of knowledge management. For this basic purpose, several definitions are given so as to make the problem of knowledge value a practical job rather than a theoretical game.

Attempts at measuring knowledge value directly usually lead to irrational assumptions, as it is still an unproved proposition whether knowledge is something that can be measured directly. It is well known that people use the electron cloud, the probability density of an electron, to describe the movement of an electron around an atomic nucleus. So we can infer that an indirect description may also be a good choice to characterise something that is of great complexity and flexibility, such as knowledge. As a result, instead of focusing enterprise attention on measuring knowledge value itself, this paper regards knowledge value from a KVC viewpoint. The KVC proposed in this paper is based on an integration of the KFSM and KMM, which are introduced to characterise knowledge and the knowledge context in an explicit way respectively. To make the study more accessible, this paper has chosen three 'dimensions' which are well recognised in daily production activities: participant, knowledge status and product lifecycle.

In fact, with rapid technological development, companies need to rely on knowledge outsourcing to remain competitive (Gavious and Rabinowitz 2003), and knowledge value measured and compared by the KVC can serve as a benchmark in evaluating different knowledge providers and thus act as a support for enterprise decisions.

Interesting perspectives for the study of knowledge value may include further analysis of other features and attributes of knowledge and the knowledge context, the development of different indicators and weights for the measurement procedure according to different types of industries, etc.

6 Conclusions

This paper begins by analysing some features of knowledge and knowledge value and proposes several basic notions about knowledge, such as knowledge state, knowledge context, knowledge maturity and knowledge value. These are then formalised and integrated into a structured and explicit approach to characterise the knowledge value chain. Finally, based on the knowledge value chain, a procedure to measure and compare knowledge value is introduced. This method of measuring knowledge value based on the KVC may not solve all the problems involved in the field of knowledge management; however, it does provide a template for a knowledge reference system and makes the application of knowledge measurement more effective.

References

Ahn, J.H. and Chang, S.G., 2004. Assessing the contribution of knowledge to business performance: the KP3 methodology. Decision Support Systems, 36 (4), 403–416.

Bacon, F., 1597. Meditationes Sacræ.

Bernard, A. and Tichkiewitch, S., 2008. Methods and tools for effective knowledge life-cycle-management. Berlin: Springer.

Bernard, A. and Xu, Y., 2009. An integrated knowledge reference system for product development. CIRP Annals, 58 (1), 119–122.

Birkinshaw, J. and Sheehan, T., 2002. Managing the knowledge life cycle. MIT Sloan Management Review, 44 (1), 75–83.

Blanco, E., Grebici, K., and Rieu, D., 2007. A unified framework to manage information maturity in design process. International Journal of Product Development, 4 (3–4), 255–279.

Carlucci, D., Marr, B., and Schiuma, G., 2004. The knowledge value chain: how intellectual capital impacts on business performance. International Journal of Technology Management, 27 (6/7), 575–590.

Chen, M.Y., Huang, M.J., and Cheng, Y.C., 2009. Measuring knowledge management performance using a competitive perspective: an empirical study. Expert Systems with Applications, 36 (4), 8449–8459.

Chilton, M.A. and Bloodgood, J.M., 2008. The dimensions of tacit and explicit knowledge: a description and measure. International Journal of Knowledge Management, 4 (2), 75–91.

Eustace, C., 2003. A new perspective on the knowledge value chain. Journal of Intellectual Capital, 4 (4), 588–596.

Gavious, A. and Rabinowitz, G., 2003. Optimal knowledge outsourcing model. OMEGA: The International Journal of Management Science, 31 (6), 451–457.

Guerra-Zubiaga, D.A. and Young, R.I.M., 2008. Design of a manufacturing knowledge model. International Journal of Computer Integrated Manufacturing, 21 (5), 526–539.

Hey, J., 2004. The data, information, knowledge, wisdom chain: the metaphorical link. Intergovernmental Oceanographic Commission, December 2004, Paris, France.

Holsapple, C.W. and Singh, M., 2001. The knowledge chain model: activities for competitiveness. Expert Systems with Applications, 20 (1), 77–98.

King, W.R. and Ko, D.-G., 2001. Evaluating knowledge management and the learning organization: an information/knowledge value chain approach. Communications of the Association for Information Systems, 5, Article 14.

Lee, C.C. and Yang, J., 2000. Knowledge value chain. Journal of Management Development, 19 (9), 783–794.

Leonard-Barton, D., 1995. Wellsprings of knowledge. Boston: Harvard Business School Press.

Liebowitz, J. and Wright, K., 1999. Does measuring knowledge make 'cents'? Expert Systems with Applications, 17 (2), 99–103.

Liu, S.C., Olfman, L., and Ryan, T., 2005. Knowledge management system success: empirical assessment of a theoretical model. International Journal of Knowledge Management, 1 (2), 68–87.

Ma, Y. and Fuh, J.Y.H., 2008. Product lifecycle modelling, analysis and management. Computers in Industry, 59 (2–3), 107–109.

Malmi, T. and Ikäheimo, S., 2003. Value based management practices—some evidence from the field. Management Accounting Research, 14 (3), 235–254.

Markow, T.T., 2004. A knowledge maturity model: an integration of problem framing, software design, and cognitive engineering. Thesis (MSc), North Carolina State University, USA.

MIKE, 2010. Retrieved from: http://mike2.openmethodology.org/wiki/Information_Maturity_Model

Nonaka, I. and Takeuchi, H., 1995. The knowledge creating company. New York: Oxford University Press.

Paulk, M.C., Weber, C.V., Curtis, B., and Chrissis, M.B., 1995. The capability maturity model: guidelines for improving the software process. Boston: Addison-Wesley Longman Inc.

Porter, M.E., 1996. What is strategy? Harvard Business Review, 74 (6), 61–78.

Pritchard, D.H., 2007. Recent work on epistemic value. American Philosophical Quarterly, 44, 85–110.

Renner, W., 2003. Human values: a lexical perspective. Personality and Individual Differences, 34 (1), 127–141.

Robinson, H.S., Anumba, C.J., Carrillo, P.M., and Al-Ghassani, A.M., 2006. STEPS: a knowledge management maturity roadmap for corporate sustainability. Business Process Management Journal, 12 (6), 793–808.

Rowley, J., 2007. The wisdom hierarchy: representations of the DIKW hierarchy. Journal of Information Science, 33 (2), 163–180.

Ruggles, R.L., 1998. The state of the notion: knowledge management in practice. California Management Review, 40 (3), 80–89.

Sillince, J.A.A., 1995. A stochastic model of information value. Information Processing and Management, 31 (4), 543–554.

Stewart, T.A., 1997. Intellectual capital: the new wealth of organizations. New York: Doubleday/Currency.

Thimm, G., Lee, S.G., and Ma, Y.S., 2006. Towards unified modelling of product life-cycles. Computers in Industry, 57 (4), 331–341.

Wang, C.L. and Ahmed, P.K., 2005. The knowledge value chain: a pragmatic knowledge implementation network. Handbook of Business Strategy, 6 (1), 321–326.

Wen, Y.F., 2009. An effectiveness measurement model for knowledge management. Knowledge-Based Systems, 22 (5), 363–367.

Wong, H.K., 2004. Knowledge value chain: implementation of new product development system in a winery. The Electronic Journal of Knowledge Management, 2 (1), 77–90.

Xu, Y. and Bernard, A., 2009. Knowledge organization through statistical computation: a new approach. Knowledge Organization, 36 (4), 227–239.

Zhao, Y., Tang, L.C.M., Darlington, M.J., Austin, S.A., and Culley, S.J., 2008. High value information in engineering organisations. International Journal of Information Management, 28 (4), 246–258.


Computer integrated reconfigurable experimental platform for ergonomic study of vehicle body design

Z.M. Bi*

Department of Engineering, Indiana University Purdue University Fort Wayne, Fort Wayne, IN 46805, U.S.A.

(Received 4 January 2010; final version received 18 May 2010)

A computer integrated reconfigurable experimental platform is introduced for the ergonomic study of vehicle body design. Its hardware system consists of lighting and camera equipment and, most importantly, many movable parts whose positions and orientations can be adjusted to meet various requirements of ergonomic study. The movements of these parts are driven by actuators. Its software components are a control system, a visualisation system and a data acquisition system. The control system is operated by human beings to issue the motion commands for system reconfiguration, the visualisation system monitors the reconfigurable platform during the reconfiguration process and ergonomic studies, and the data acquisition system acquires all meaningful data in the integrated experimental platform. These data include the positions and orientations of all movable parts, the status of the lighting and camera equipment, and the trajectories and postures of a test driver entering or exiting the platform. These data can be utilised to optimise the vehicle exterior components to meet the ergonomic needs of different drivers.

In this paper, the virtual control and the validation of the system set-up are mainly discussed. The system architecture is presented, and the designs of the hardware and software components are introduced. Two graphical user interface (GUI) systems have been developed for the control and the visualisation of the reconfigurable experimental platform. A visibility study is conducted to ensure that a type of motion tracking equipment is appropriate for collecting the data of entering and exiting movements by a driver during experiments.

Keywords: virtual reality; systems design; system simulation; sensors; automated inspection; reconfigurable platform; ergonomics; avatar; data acquisition

1 Introduction

A survey has shown that every year there are around 15,000 slip/fall injuries to the drivers of medium or large-size vehicles. About half of the falls happened on tractors, mostly during entering or exiting. The main features relating to the causes of the injuries include steps (57%), handholds (7%) and the ground (20%). These injuries have caused significant economic loss; one large US fleet reported that slips/falls around vehicles directly incur over $20 million (Parkinson et al. 2007, Reed 2009). From the perspective of vehicle body design, it is very important to study the relation between human variability and the appropriate geometries and positions of vehicle exterior components; note that ergonomic study is an essential means to evaluate the performance of a specific design solution.

The importance of ergonomics study for successful product design has been well recognised, and numerous researches have been conducted to address various issues relevant to ergonomics. The relation between automation, productivity and ergonomics was investigated through a case study, and it was suggested that the integration of ergonomic considerations into production system design can significantly increase the productivity of a partially automated system (Neumann et al. 2002). A similar conclusion was drawn by Yeow and Sen (2003): in their study, the ergonomic design brought a significant reduction of cost, a reduction in rejection rate, increased monthly revenue, and improvements in productivity and quality; the interventions implemented were simple and inexpensive but resulted in many benefits. An early review (Scapin and Bastien 1996) discussed the progress of ergonomic study for product design and summarised some ergonomic criteria for evaluating the ergonomic quality of interactive systems. Lee (1999) argued for a high-touch process to incorporate innovative ideas into the development of ergonomic products and backed up his arguments with many case studies.

For the ergonomics study of vehicle design, traditionally, designers need to build a stationary, three-dimensional mock-up model from wood and metal based on the engineering drawings for examination and testing. Each round of design changes would require another physical model to be built, adding time and cost to the development program.

*Email: biz@ipfw.edu

Vol. 23, No. 11, November 2010, 968–978

ISSN 0951-192X print/ISSN 1362-3052 online

© 2010 Taylor & Francis

DOI: 10.1080/0951192X.2010.500678

Moreover, it takes a long time to build a specific model. For example, a typical seating buck might take more than six weeks to build and most often is already out of date by the time it is finished. The collection of data about the exact positions of vehicle components is another issue: the set-up and the data collection are mainly accomplished manually, and the numerous required experiments take a long time and a considerable amount of labour and are still very error-prone. It is therefore desirable to develop a computer integrated experimental platform (physical model or simulator) to simulate entering/exiting conditions under various design parameters for medium or heavy duty vehicles.

One reconfigurable programmable platform is commercially available, but its application is confined to the design of passenger cars and interior design. It has been named the programmable vehicle model (PVM) for Ford (Prefix Corp. 2009). A designer can change the dimensions of the interiors and look at some options for placing seats and controls. The PVM works similarly to 3D mock-up models, except that the PVM allows larger surfaces, such as roof sections, door panels and instrument panels, to be moved in and out with the help of hydraulics. Once the dimensions are established, data can be transferred from the device to a computer (Kobe 1995). Note that only the adjustments of internal components are implemented, and they are inapplicable to the ergonomic study of medium or heavy duty vehicles owing to instrumentation and variability needs. For the design of medium to heavy duty vehicles, a few manually-operated experimental platforms have been used for ergonomic study (Reed 2009), while no automated reconfigurable platform has been reported.

Computer-aided virtual or physical experimental platforms for ergonomic study are not new, and many relevant researches have been published in the literature. Rubio et al. (2005) reviewed the state of the art of virtual reality (VR) for next-generation manufacturing; VR is described as a low-cost, secure and fast analysis tool, and it is a very helpful and valuable work tool for the simulation of manufacturing systems. A VR-based system was developed to support the analysis of information requirements for the die-casting industry (Bal et al. 2008). Pappas et al. (2006) proposed a web-based platform for collaborative process and product design evaluation; this platform provides real-time collaboration of multiple users at different sites on the same project. A virtual assembly approach was proposed to model assembly processes (Jun et al. 2005). Mavrikios et al. (2006) developed a prototype system which deployed VR for immersive and interactive simulation of welding processes. Wiendahl and Fiebig (2003) developed a computer-aided tool for factory design; it took into consideration the different viewpoints of all specialists involved in the planning. The application of VR technologies has been extended to many other areas; in the life-cycle design of shoe products, for example, the VR technology was used as a support tool to increase the design efficiency (Vigano et al.). The virtual design and validation of the platform is expected to (i) reduce the potential possibility of investment waste due to inappropriate design, (ii) justify the design suitability and feasibility vividly and cost-efficiently, (iii) accelerate the detailed design, (iv) evaluate the solution of using a motion tracking system as one of the means for data acquisition, and (v) facilitate the accuracy of estimating the required resources and components at the phases of detailed design, fabrication and ergonomic study.

The objectives of the presented works are to (i) develop a computer integrated reconfigurable experimental platform for vehicle exterior design, (ii) implement the motion control system, and (iii) validate whether it is feasible to apply motion tracking equipment to track drivers' motion in ergonomic studies. Therefore, the following three tasks are mainly involved: (i) to design a conceptual platform and visualise it in a VR environment, (ii) to develop graphical user interfaces (GUI) for motion control and visualisation, and (iii) to conduct the visibility study for the motion tracking system. The remainder of the paper is organised as follows. In Section 2, the architecture of the reconfigurable experimental platform is introduced, and the system components and their relations are discussed. In Section 3, the virtual model of the reconfigurable platform is developed, and the design considerations for the determination of system components are explained. In Section 4, two GUI systems are introduced: the first one is to control the physical model and the second one is to visualise the reconfiguration process and ergonomic behaviours in real-time. In Section 5, a visibility study is discussed; the process of preparing CAD files with different avatar models under three different poses is introduced, and a motion tracker model has been embedded within the virtual platform for visual validation. In Section 6, the system reconfiguration process is discussed. In Section 7, the presented works are summarised, and the limitations and future works are pointed out.

2 System architecture

The architecture of the developed reconfigurable platform is shown in Figure 1. The system consists of six modules: the physical model, the virtual model, the data acquisition system, the Labview-based control system, the Eclipse-based visualisation system, and the embedded database.

The physical model consists of movable components and their supporting frames; a set of actuators is installed to drive these movable components. The virtual model is a mirrored computer representation of the physical model. For hardware development, it is recommended to use as many off-the-shelf components as possible; their computer models are often available from suppliers, and those models can be directly imported and assembled with others in the virtual model. The data acquisition system obtains the statuses of the movable components and tracks the movements of a test driver during the test. Encoders are used to locate the positions of the linear actuators; to track the movement of a test driver, the driver has to wear a suit with markers. A set of cameras is installed to capture the light reflections from the markers and thus detect the motion trajectory of a test driver when he/she enters or exits the vehicle. The Labview-based control system is developed to control the actuators of the physical model; in comparison, the Eclipse-based visualisation system is to monitor the working condition of the reconfigurable experimental platform.

Finally, an embedded database is included in the system. It keeps the data relevant to all system components, such as the virtual model, the profiles and statuses of test drivers, the set-up of movable components, the set-up of lights and cameras, and the results of ergonomic studies. After the experiments, designers can retrieve any type of data for the design optimisation of a vehicle.

3 Virtual model of hardware system

The hardware system consists of lights and camera equipment and, most importantly, many movable parts whose positions and orientations can be adjusted to meet various requirements of ergonomic study. The movements of these parts are driven by actuators. As shown in Figure 2, the main components include three steps, door and B-pillar handles, side door, driver's seat, pedals, and steering wheel column. The virtual model has been developed to detect potential collisions in the operation of the movable components.

3.1 Design considerations

A total of 34 motions have been identified from the design requirements; however, each movable component requires a simple and de-coupled linear or rotational motion. Numerous linear and rotational actuators are commercially available. To design these movable components cost-effectively, the number of customised components has to be minimised by adopting as many commercially available products as possible. The design is focused on the selection of available products which can optimally meet the motion ranges, accuracy and loads for the specified components. The strategy of 'divide and conquer' is applied: the components are selected or designed to meet their motion ranges individually. After the adjustable components are determined and placed, the geometric dimensions of the support frames can be finally designed to hold all components firmly on the platform. The designed components are assembled as a whole to verify whether or not (i) the structure has enough space to place all components, and (ii) there are collisions when a component is under adjustment. The design has been an iterative process to generate a final solution in which all of the components are arranged appropriately and there is no collision in operation.

Note that some components, in particular the steps, have overlapping motion ranges in their height adjustments. These potential collisions are unavoidable in the design. However, this type of collision does not affect the application since it is predictable, and the avoidance of collision is achieved easily by the controlling program. SolidWorks is used as the main tool for the conceptual design of the reconfigurable platform. The CAD models of commercial products have been converted into formats such as IGES, STEP and VRML, which are acceptable to Unigraphics, Envision and Mockup. In the following sections, the designs of some movable components are provided as examples.

Figure 1 Architecture of reconfigurable platform
Figure 2 Virtual model of the hardware reconfigurable platform

3.2 Examples of component design

The design of the steps. As shown in Figure 3, various steps can be selected in vehicle design, and the appearances and geometries of the steps have an impact on the ergonomic behaviours. The steps are required to adjust their lateral (X) and vertical (Z) positions automatically. In the Z direction, the adjustment ranges overlap with each other. The design specifications also imply that the number of steps in use can be changed from one to three based on the height of the cab floor relative to the ground floor. An extra requirement for the steps is that they have to be hidden under the cab floor when they are not in use. Three design options for automatic height adjustments were tested, and the final solution in Figure 2 was selected based on a comparison among the alternatives.

Design of the handles. Handles are the most diversified components in vehicle design; Figure 4 shows some examples of handle designs. It is very difficult to define a general requirement for the various handles that covers all of the ergonomic experiments. The design meets the requirements of three types of handles, i.e. the B-pillar handle, the upper handle, and the inside inclined handle. The B-pillar handle is mounted on the moving part of the slide door; its position is determined by the slide door, and its vertical position (Z) can be changed manually. A commercially available lock from Bosch fixes the position when the adjustment is completed. The inclined handle can be mounted at any position and orientation on the right door, with its two ends screwed into pre-fabricated holes. The upper handle is mounted horizontally, and its two ends are mounted on the left and right door panels, respectively, to increase its rigidity. All of the handles adopt telescopic parts for the changes of their lengths.

Design of the wheel column. The steering wheel column is adjustable in three directions: the fore/aft (Y) direction, the vertical (Z) direction, and the orientation of the wheel. Two design challenges are its potential interference with the pedals support and its strength to endure the large forces and moments applied on the wheel by a driver. To meet these challenges, parallel structures are considered.

Figure 3 Various steps in application (Source: Reed 2009)
Figure 4 Various handles in application (Source: Reed 2009)

As shown in Figure 5, four vertical profile columns are mounted rigidly on the platform cab floor as the rails for a Z-plate to move along the Z-direction. The motion is provided by a linear actuator whose motor is grounded on the cab floor. The Z-plate consists of four inside profile columns and is connected to the closed rails by four slides, respectively. On the top side of the Z-plate, there are two horizontal profile beams; each connects two vertical columns together rigidly. These two beams are used as the rails for the movement in the Y-direction, and another linear actuator is mounted on the Z-plate as the provider of the motion in the Y-direction.

The adjustment of the wheel orientation is depicted in Figure 6. A cylinder actuator is used to drive the motion; its fixed end is mounted on the Y-plate by a revolute joint, and its moving end is connected to the wheel link by a revolute joint. The wheel link is also connected to the Y-plate by a revolute joint, and the wheel is welded rigidly on the wheel link. The cylinder actuator drives the orientation change of the steering wheel.

Design of the pedals. The pedals can be adjusted manually or automatically in all three directions. Three positioning systems are provided for the clutch, the accelerator and the brake. Once a pedal is adjusted, the scale and pointer attached on the positioning slide indicate the position of the pedal.

4 Graphic user interface

The objective of developing a graphic user interface (GUI) is to control the physical model and visualise the virtual model. The adjustments can be made based on the control commands specified by an operator. The GUI has been developed for both the physical model and the virtual model.

4.1 Labview-based control system

Labview from National Instruments (NI) is one of the most popular systems for motion control, and it has a module called the 'Mechatronics Toolkit' (NI Corp. 2009), which is designed to develop complex multi-axis motion profiles for machines and validate them using simulation. The toolkit is capable of designing motion profiles, detecting collisions, simulating the mechanical dynamics of a machine including mass and friction effects, estimating machine cycle time performance, validating component selections for motors, drives and mechanical transmissions, and evaluating engineering tradeoffs between the mechanical, electrical, control and embedded system aspects of the design.

The Labview Mechatronics Toolkit has its graphic motion simulation in SolidWorks; therefore, the virtual model can in fact be connected to the controlling system directly. The motion constraints of the components in the virtual model are defined by assembly mates. The Mechatronics Toolkit provides a communication channel that allows a Labview GUI to access (read and write) all of the parameters used in these assembly mates. As a result, the Labview GUI is capable of manipulating the positions of the adjustable components by changing the coordinates of the components in the assembly. As shown in Figure 7, the Labview GUI has been developed to control the physical model.

Ideally, the Labview Mechatronics Toolkit would also support the real-time visualisation of the reconfigurable platform; unfortunately, the developed Labview GUI has a considerable time delay in the graphic simulation. Two key reasons are the complexity of the virtual model and the discontinued communication between Labview and Solidworks: whenever a set of new data is sent to Solidworks, its Cosmosmotion module has to be activated repeatedly to create a simulation of the motion, and an intensive calculation can be incurred. The good side of the Labview GUI is that, in the control of the future physical reconfigurable platform, the GUI controller communicates directly with the actuators instead of Solidworks. Therefore, the developed Labview GUI can only be used in the control of the physical model, and an independent visualisation program is required.

Figure 5 Design of wheel support
Figure 6 Mechanism for orientation change of steering wheel

4.2 Eclipse-based visualisation system

The unsatisfactory visualisation performance of the Labview-based control system led to the development of an independent Eclipse-based visualisation system. It is based on technologies similar to those used for the remote control and monitoring of machining systems through the Internet (Bi et al. 2007).

To visualise the operation of the reconfigurable platform, the first step is to generate Java-compatible models (such as VRML, IGES and STL) for all components of the platform from their Solidworks models; these VRML models are used to create a graphic simulation. The second step is to define the assembly of the motion components in Java 3D, in which the corresponding motion parameters are determined to describe all relative motions among the components of the assembly. The final step is to develop the GUI to manipulate the values of the motion parameters. The GUI has been developed in the Eclipse environment.

As shown in Figure 8, a simplified platform model is used for the demonstration. Mouse-driven sliders are created to manipulate the positions and orientations of the components, and each actuated motion corresponds to a slider. A user can change the position of a component by dragging its corresponding slider. In addition, the desired platform configuration can be saved in the embedded database, or a saved configuration can be retrieved from the database easily. The graphic simulation achieves a satisfactory responding speed.

5 Visibility study

5.1 Preparation of avatars with markers

Avatar models of different sizes for both males and females are needed for the visibility study. As shown in Figure 9, the system library in the Envision software includes all of the digital models of these avatars; for both male and female, there are 5th, 50th and 95th percentile avatars, respectively.

A motion tracking system has been proposed for use in the ergonomic study of the reconfigurable platform. A motion tracking system calculates an object's motion by identifying and recording the motion trajectories of the targeted markers attached on the object.

Therefore, a set of markers has to be attached on the avatars for the visibility study. There are many possible combinations in placing markers on an avatar; we follow the guide for standard marker placement (ETC 2009) to minimise the number of markers in modelling. Based on this guide, the names of the markers are given in Figure 10.

All six avatar models have been attached with the markers in appropriate positions, and Figure 11 shows an example of the P50 male avatar with 42 markers in his digital model.

5.2 Preparation of entering/exiting poses

In the Envision software, an avatar model is integrated with the virtual buck model, and the motion joints of the avatar model, as well as the motion components of the virtual model, are adjusted to create different poses of the avatar on the platform. The investigated poses include an avatar (i) on the step, (ii) on the door frame, and (iii) partially seated. A total of 18 VRML models have been generated for six types of avatars, each with three different poses. Figure 12 shows an example of the female P5 avatar located on the virtual model in three different poses.

Figure 7 Labview-based control for the physical model
Figure 8 Eclipse-based visualisation system for the virtual model

5.3 Preparation of tracking camera positions

The tracking camera configurations provided are based on the parameters of the Motion Analysis Corporation Falcon Analogue system. Other systems will have varying parameters, but the fundamental principles remain true for any optical tracking system. Each Falcon camera achieves an optimal balance between capture volume and accuracy, for our needs, at a range of 4 metres with standard 25 mm markers. The field of view of each camera is 55 degrees horizontal and 40 degrees vertical. With the physical performance of the cameras known, the planning of camera positions can begin.

Figure 9 Avatar models in Envision
Figure 10 Marker placement guide (ETC 2009)

The goal is to place the cameras in positions that will allow accurate and complete motion data to be collected. The following guidelines constrain the placement of cameras:

- At least two cameras must see a marker for it to be tracked, and visibility in three cameras provides optimal accuracy (a triangulation sketch follows this list). An occluded marker can only be compensated for temporarily; if it remains hidden from multiple cameras, tracking of that point will be lost until it re-enters the field of view.
- Cameras should be placed so as to see the actor clearly and with as little obstruction as possible. For example, the buck door can block the line of sight to many markers, so cameras should be placed so that the door position does not affect their visibility of the actor. On occasion this is not possible, so plan for one camera that can see the actor when the door is open and another that can see without obstruction when the door is closed.
- Cameras should be spaced apart as evenly as the capture volume allows. If cameras are too close together, marker position calculation through triangulation becomes less accurate. Well spaced cameras also provide more complete coverage of the capture volume.
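To make the two-camera requirement concrete, the following minimal C++ sketch (an illustration written for this discussion, not code from the system) triangulates a marker as the midpoint of the shortest segment between the two viewing rays; the camera origins and ray directions are made-up values. When the rays are nearly parallel, i.e. the cameras are too close together, the denominator below approaches zero and the estimate degrades, which is exactly why even spacing matters.

```cpp
// Minimal illustration (not code from the system): estimating a marker's 3D
// position from two camera rays with the midpoint method. One camera fixes
// only a viewing ray; a second camera resolves the depth.
#include <array>
#include <cstdio>

using Vec3 = std::array<double, 3>;

static Vec3 sub(const Vec3& a, const Vec3& b) { return {a[0] - b[0], a[1] - b[1], a[2] - b[2]}; }
static double dot(const Vec3& a, const Vec3& b) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }

// Each camera sees the marker along the ray p = origin + t * dir.
// Returns the midpoint of the shortest segment between the two rays.
static Vec3 triangulate(const Vec3& o1, const Vec3& d1, const Vec3& o2, const Vec3& d2) {
    const Vec3 r = sub(o1, o2);
    const double a = dot(d1, d1), b = dot(d1, d2), c = dot(d2, d2);
    const double d = dot(d1, r), e = dot(d2, r);
    const double denom = a * c - b * b;         // near 0 for almost-parallel rays,
    const double t1 = (b * e - c * d) / denom;  // i.e. cameras placed too close
    const double t2 = (a * e - b * d) / denom;  // together give poor accuracy
    const Vec3 p1{o1[0] + t1 * d1[0], o1[1] + t1 * d1[1], o1[2] + t1 * d1[2]};
    const Vec3 p2{o2[0] + t2 * d2[0], o2[1] + t2 * d2[1], o2[2] + t2 * d2[2]};
    return {(p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2, (p1[2] + p2[2]) / 2};
}

int main() {
    // Two cameras 4 m away from a marker at (0, 0, 1), looking from different sides.
    const Vec3 p = triangulate({-4, 0, 1}, {1, 0, 0}, {0, -4, 1}, {0, 1, 0});
    std::printf("marker at (%.2f, %.2f, %.2f)\n", p[0], p[1], p[2]);  // (0.00, 0.00, 1.00)
}
```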

Three possible configurations have been generated.

5.4 Visibility validation in Mockup

Each camera has its field of view plotted in the 3D model. In Mockup, the user can navigate to the point of view of a camera and see what that camera can see. The user can also move to any position and select the desired field of view. The selected field becomes highlighted, making it very clear where the camera can and cannot see. Sample positions of male and female actors of varied stature have been provided to help define the capture volume. The 3D model is checked for:

- areas hidden from the cameras;
- sufficient multi-camera overlap;
- how the buck's structure and the actor's body can hide markers during entry/egress.

Each of these factors affects the precise positioning of each camera.

Figure 11 Example of the avatar with markers in Envision
Figure 12 Three poses in entering the reconfigurable platform

5.5 Conclusion of visibility study

The cameras can be positioned to capture data from all markers. Flexibility in the positioning of the cameras also allows the system to compensate for changes in the design. Since the actor and the virtual platform model share the capture volume, it is inevitable that some markers will become momentarily hidden from multiple cameras. This effect can be minimised with good camera placement, and compensated for using some of the following techniques that have proven effective in past projects.

- 'Virtual Join' is the term used by Motion Analysis to describe a technique of joining markers together in groups in such a way that, if a single marker is hidden, its position is confirmed by its relation to the surrounding markers.
- 'Virtual Markers' are positions that are calculated from multiple real markers. In this way, stable and accurate tracking data are collected even when markers are occluded. It is similar to 'Virtual Join', except that it does not require a real marker.

The above techniques work in real time, improving the capture data as they are collected. Once the data are saved, post-production techniques can be used to patch any remaining gaps. Each marker has an x, y and z position in space; as time passes, the accumulation of these positions creates curves. When there is a gap in the data, these curves can be extrapolated to produce very accurate estimates of the marker position over small gaps, as sketched below.

With good camera set-up, proper marker links and attention to post-processing when necessary, accurate and valuable motion data can be collected.
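As a concrete illustration of patching a small gap, the following C++ sketch (ours, not the Motion Analysis post-production code) extrapolates each coordinate of a marker with the unique quadratic through the last three valid samples; the sample values and the two-frame gap are assumed for the example.

```cpp
// Illustrative sketch: filling a short gap in a marker trajectory by quadratic
// extrapolation of each coordinate from the last three valid samples, taken at
// a uniform time step.
#include <cstdio>

struct Sample { double x, y, z; };

// Given equally spaced samples s0, s1, s2 (s2 most recent), extrapolate the
// value k steps after s2 using the unique quadratic through the three points.
static double extrap(double s0, double s1, double s2, int k) {
    const double d1 = s1 - s0, d2 = s2 - s1;      // first differences
    const double dd = d2 - d1;                    // second difference (curvature)
    return s2 + k * d2 + 0.5 * k * (k + 1) * dd;  // quadratic continuation
}

int main() {
    // Last three tracked positions of one marker before a two-frame dropout.
    const Sample s0{0.00, 1.00, 0.90}, s1{0.10, 1.02, 0.88}, s2{0.20, 1.05, 0.85};
    for (int k = 1; k <= 2; ++k) {                // patch the two missing frames
        const Sample g{extrap(s0.x, s1.x, s2.x, k),
                       extrap(s0.y, s1.y, s2.y, k),
                       extrap(s0.z, s1.z, s2.z, k)};
        std::printf("gap frame %d: (%.3f, %.3f, %.3f)\n", k, g.x, g.y, g.z);
    }
}
```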

6 System reconfiguration process

In an ergonomic study, numerous experiments are required to obtain sufficient data. Each experiment corresponds to a specific set-up of the movable components on the reconfigurable platform. The flowchart in Figure 13 illustrates the reconfiguration process when the requirements of an ergonomic study are given.

It is ideal to keep all of the tested configurations in the database. When a new test is initialised, a comparison is made between the new requirements and the configurations of the existing tests (a hypothetical sketch of this lookup follows). If the requirements are totally new, a new configuration has to be created and inserted in the database, and an operator has to input the desirable positions for all movable parts and the positions of the lights via the Labview-based GUI system. Otherwise, the relevant portion of the established procedure and data of a similar experiment can be imported to reconfigure the platform. When the desirable positions of the system components are ready, the control system issues the motion commands to relocate the system components accordingly. After all system components are at their desirable positions, the ergonomic studies on test drivers can be conducted. Since it is very important to take into consideration the variation in age, height, size, weight and habit, each test driver maintains his/her own profile in the database. The ergonomic studies are to find the correspondence between the entering or exiting behaviour patterns and the parameters in the profile of a test driver. Typically, one experiment will involve a number of system configurations and several test drivers; the experiment is completed when all of the combinations of system configurations and test drivers have been tried. During the entire process, the Eclipse-based GUI system provides the visualisation of the reconfigurable platform and the motion tracking results based on the feedback of the cameras.
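A hypothetical sketch of the comparison step is given below; the component names, the tolerance and the flat key-value representation of a configuration are all assumptions made for illustration, not the system's actual schema.

```cpp
// Hypothetical sketch of configuration reuse: look up a stored platform
// configuration whose component targets match the new requirements within a
// tolerance, otherwise register a new configuration in the database.
#include <cmath>
#include <cstdio>
#include <map>
#include <string>
#include <vector>

// Desired position (mm) or angle (deg) per movable component, keyed by name.
using Config = std::map<std::string, double>;

// A stored configuration is reusable if every requested target is within
// tolerance of the stored value; the tolerance is an assumed design choice.
static bool matches(const Config& stored, const Config& wanted, double tol) {
    for (const auto& [part, value] : wanted) {
        const auto it = stored.find(part);
        if (it == stored.end() || std::fabs(it->second - value) > tol) return false;
    }
    return true;
}

int main() {
    std::vector<Config> database = {
        {{"step1_z", 350.0}, {"seat_y", 120.0}, {"wheel_tilt", 25.0}},
    };
    const Config wanted = {{"step1_z", 351.0}, {"seat_y", 120.0}, {"wheel_tilt", 25.0}};

    for (std::size_t i = 0; i < database.size(); ++i)
        if (matches(database[i], wanted, 2.0)) {     // reuse a similar set-up
            std::printf("reusing configuration %zu\n", i);
            return 0;
        }
    database.push_back(wanted);                      // totally new requirements
    std::printf("created configuration %zu\n", database.size() - 1);
}
```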

7 Conclusion

A reconfigurable experimental platform has been developed for ergonomic studies in the design of vehicle exteriors. The adjustments of the major components are automated, and the data acquisition and processing of the adjustments are performed by computer. This platform is expected to reduce the preparation time of ergonomic studies significantly.

- The virtual model of the reconfigurable platform has been developed. For the sake of cost saving, most of the parts used in the design are adopted directly from commercially available products. It has been visually verified that no collision occurs among the adjustable components, except for the components with overlapping motion ranges; the potential collisions of those components are predictable and can easily be avoided through the control. The number of movable components can be increased or reduced to accommodate different needs of ergonomic studies.
- Two GUI systems have been developed. One is a Labview-based GUI system to control the physical model and the other is an Eclipse-based GUI system to visualise the operation of the reconfigurable platform in real time. The control system has been developed in Labview with the integration of Solidworks' Cosmosmotion; an operator can directly control the reconfiguration of the platform via a Labview control panel. The visualisation system is developed in Java; it has achieved a satisfactory responding speed in terms of the motion simulation of the reconfigurable platform, and it possesses the advantage that the Java programming environment incurs no cost.
- To investigate the feasibility of using the motion tracking system in the ergonomic study of a reconfigurable platform, (i) the six avatar models were each attached with 42 markers, and three poses were created for each type of avatar, (ii) the integrated virtual model with avatars was generated in Envision and exported to the Mockup software, and (iii) the light and camera models were established and integrated with the virtual model to check the visibility of all markers on an avatar. Visual checking in the VR environment has shown that it is feasible to use the motion tracking system in the ergonomic study once the physical reconfigurable platform has been developed.

Acknowledgements

The author would like to thank his former colleagues Dr. Sherman Lang, Dr. Ian Wang, and Mr. Steve Kruithof at the National Research Council Canada for their numerous assistances, and to thank Dr. Lenora Hardee and Mr. James A. Nooks at the International Truck and Engine Corp. (ITEC) for giving a unique opportunity to learn their design needs through a collaborative project. Thanks also for the financial support from the Office of Research and External Support (ORES) of the Indiana University–Purdue University Fort Wayne through his startup research fund.

References

Bal, M., Manesh, H.F., and Hashemipour, M., 2008. Virtual-reality-based information requirements analysis tool for CIM system implementation: a case study in die-casting industry. International Journal of Computer Integrated Manufacturing, 21 (3), 231–244.

Bi, Z.M., Lang, S.Y.T., Orban, P., and Verner, M., 2007. An integrated design toolbox for tripod-based parallel kinematic machines. ASME Journal of Mechanical Design, 129 (8), 799–807.

ETC, 2009. Marker placement guide. Entertainment Technology Center, Carnegie Mellon University, www.etc.cmu.edu/projects/mastermotion/Documents/markerPlacementGuide.doc

Jun, Y., Liu, J., Ning, R., and Zhang, Y., 2005. Assembly process modeling for virtual assembly process planning. International Journal of Computer Integrated Manufacturing, 18 (6), 442–451.

Kobe, G., 1995. Reconfigurable interior buck. Automotive Industries, 175 (2), 180–183.

Lee, M.W., 1999. High touch - a new strategy for ergonomic product design. Advances in Occupational Ergonomics and Safety, 3, 3–8.

Mavrikios, D., Karabatsou, V., Fragos, D., and Chryssolouris, G., 2006. A prototype virtual reality-based demonstrator for immersive and interactive simulation of welding processes. International Journal of Computer Integrated Manufacturing, 19 (3), 294–300.

Neumann, W.P., Kihlberg, S., and Medbo, P., 2002. A case study evaluating the ergonomic and productivity impacts of partial automation strategies in the electronics industry. International Journal of Production Research, 40 (16), 4059–4075.

NI Corp., 2009. NI LabVIEW-SolidWorks Mechatronics Toolkit (Alpha version 01), http://zone.ni.com/devzone/cda/tut/p/id/6183

Pappas, M., Karabatsou, V., Mavrikios, D., and Chryssolouris, G., 2006. Development of a web-based collaboration platform for manufacturing product and process design evaluation using virtual reality techniques. International Journal of Computer Integrated Manufacturing, 19 (8), 805–814.

Parkinson, M.B., Reed, M.P., Kokkolaras, M.P., and Panos, Y., 2007. Optimizing truck cab layout for driver accommodation. Transactions of the ASME, Journal of Mechanical Design, 129, 1110–1117.

Prefix Corp., 2009. The Prefix story. http://www.prefix.com/Story/

Reed, M., 2009. Truck ingress/egress safety: field and laboratory research, http://www.thsao.on.ca/docs/truck_ingress_egress.pdf

Rubio, E.M., Sanz, A., and Sebastian, M.A., 2005. Virtual reality applications for the next-generation manufacturing. International Journal of Computer Integrated Manufacturing, 18 (7), 601–609.

Scapin, D.L. and Bastien, J.M.C., 1997. Ergonomic criteria for evaluating the ergonomic quality of interactive systems. Behaviour and Information Technology, 16 (4–5), 220–231.

Vigano, G., Mottura, S., Greci, L., Sacco, M., and Boer, C.R., 2004. Virtual reality as a support tool in the shoe life cycle. International Journal of Computer Integrated Manufacturing, 17 (7), 653–660.

Wiendahl, H.-P. and Fiebig, T.H., 2003. Virtual factory design: a new tool for a co-operative planning approach. International Journal of Computer Integrated Manufacturing, 16 (7–8), 535–540.

Yeow, P.H.P. and Sen, R.N., 2003. Quality, productivity, occupational health and safety and cost effectiveness of ergonomic improvements in the test workstations of an electronic factory. International Journal of Industrial Ergonomics, 32 (3), 147–163.


Discrete particle swarm optimisation combined with no-wait algorithm in stages for scheduling mill roller annealing process

Jiafu Tang* and Jiwei Song

Department of Systems Engineering, Key Laboratory of Integrated Automation of Process Industry, Northeastern University (NEU), Shenyang, Liaoning, 110004, P.R. China

(Received 28 November 2009; final version received 5 July 2010)

In this paper, a mill roll annealing operation scheduling problem is investigated against the background of a cast steel plant of Machinery & Mill Roll Co., Ltd in mainland China. During the annealing processes, a roller needs to be processed in a multistage process with different types of furnaces, and there is no waiting between the stages. Based on the analysis of the features of the mill roll annealing process, this problem can be formulated as a no-wait hybrid flow shop scheduling problem, of which the 'job' is a batch of rough rolls and the 'machine' is a heating furnace. As each batch will be of a different size, the jobs will have different sizes and processing times. According to the no-wait characteristic between two sequential operations of a job, a no-wait algorithm in stages is designed to obtain the initial solution. Combined with the no-wait algorithm in stages, a discrete particle swarm optimisation algorithm has been developed to solve the integer programming model. In a simulation experiment with real data, the applicability and the effectiveness of the algorithms are demonstrated by comparisons and analysis of the experimental results, and equipment reformation strategies of actual reference value are given as well, which is beneficial for the policy-maker in arranging production reasonably. Furthermore, some large-scale instances of the no-wait hybrid flow shop scheduling problem are studied effectively.

Keywords: mill roll annealing process; no-wait hybrid flow shop; discrete particle swarm optimisation; no-wait algorithm in stages

*Corresponding author. Email: jftang@mail.neu.edu.cn

1 Introduction

Mill roll production is a complicated multistage process with both continuous and discrete features, which is composed of smelting, molding, casting, annealing and craft machining. In general, during the production process of a mill roll, the annealing process takes 7 to 30 days, which accounts for more than half of the production cycle. Although the enterprise's production orders are increasing continuously, the number of heating furnaces (machines) for the mill roll annealing process (MRAP) typically is not increased, which makes the MRAP the bottleneck of the whole production flow. In this sense, how to schedule the annealing process becomes an important problem in order to improve the production efficiency of the MRAP under the current working conditions. This is the motivation of this paper.

In light of the annealing process requirements, the rough mill rollers assigned to the same furnace for annealing should have similar annealing temperature curves. To this end, the rough mill rolls must be aggregated into families before the MRAP, and then the rough mill rolls of the same family are grouped into batches; each batch of rough mill rolls is composed only of an identical type of rough mill rolls with similar processing temperatures. Different process routines are needed for the different types of mill roll in a roller manufacturing enterprise: there is a single-stage annealing process (1SAP), e.g. a high-temperature furnace (HTF) only; a two-stage annealing process (2SAP), transferring from a high-temperature furnace to a low-temperature furnace (LTF); and a three-stage annealing process (3SAP), starting from the high-temperature furnace (HTF), then transferring to a low-temperature furnace (LTF), and finally to a pit furnace (PF). This paper focuses on the two-stage annealing process (2SAP) and the three-stage annealing process (3SAP). A practical HFS scheduling problem is investigated, derived from an annealing shop of the cast steel plant of Sinosteel Xingtai Machinery & Mill Roll Co., Ltd (Xingmachinery).

According to the requirements of the practical production technology, the 2SAP and 3SAP can be viewed as processes in which the rough mill roller after casting is processed through continuous heating and cooling from HTF to LTF, and from HTF to LTF and PF,


respectively, and the mill roller is finally formed. Based on the analysis of the features of the MRAP, all batches of rough mill rolls are processed through two or three working procedures in the same direction, in which waiting between any two working procedures is not allowed and each working procedure has multiple machines of different kinds. The production flow of the 2SAP and 3SAP can be seen as a no-wait hybrid flow shop (NWHFS) scheduling problem, of which the 'job' is a batch of rough rolls and the 'machine' is a heating furnace. In this paper, two-stage and three-stage NWHFS scheduling problems are studied, in which rough mill rolls are processed from HTF to LTF, and from HTF to LTF and PF, respectively.

The authors have not found any work on the two-stage and three-stage NWHFS scheduling problem for the mill roller annealing process in the literature. However, there is related work on the hybrid flow shop (HFS) scheduling problem, which is viewed as an extension of the flow shop scheduling problem with one machine at each working procedure, and the flow shop scheduling problem has been extensively studied (Ben-Daya and Al-Fawzan 1998, Riad and Portmann 2006, Christos and George 2007, Bin et al. 2009). One of the earliest works dealing with the problems related to this paper is by Arthanary and Ramaswamy (1971), where the authors proposed a B&B algorithm for a simple HFS scheduling problem with only two stages. Gupta (1988) has shown that the two-stage flow shop scheduling problem with parallel processors to minimise makespan is NP-hard. Later, a different B&B algorithm consisting of three parts was presented; the three parts of the algorithm were lower bound calculation, branching and node elimination, respectively (Brah and Hunsucker 1991). In another work to enhance the efficiency of B&B procedures, satisfiability tests and time-bound adjustments based on energetic reasoning and global operations were implemented (Neron et al. 2001). Gupta and Tunc (1991) proposed two polynomially bounded heuristic algorithms to find an acceptable solution in order to minimise makespan; the algorithms could solve a problem with only two stages, where the first stage had only one machine and the second stage allowed multiple identical machines. Gupta and Tunc (1994) also presented four heuristic algorithms to minimise makespan for the problem with separable setup and removal times. Botta-Genoulaz (2000) considered scheduling in an HFS environment with precedence constraints and time lags to minimise maximum lateness; to solve this problem, six heuristics were presented and the performance and the robustness of the algorithms were assessed. Ruiz and Maroto (2006) developed a new heuristic algorithm based on a genetic algorithm for an HFS scheduling problem with unrelated parallel machines per stage, sequence-dependent setup times and machine eligibility; they tested the effectiveness of the algorithm through several experiments with industrial data taken from companies in the ceramic tile manufacturing sector. Hamid Allaoui and Abdelhakim Artiba (2006) investigated the two-stage HFS scheduling problem with only one machine on the first stage and m machines on the second stage to minimise the makespan; a branch and bound model was given, and the LIST algorithm, the LPT algorithm and an H-heuristic were proposed, respectively. Mohammad Reza and Mohammad Ali (2009) studied the problem of parallel batch scheduling in a hybrid flow shop environment minimising Cmax, in which both three heuristic algorithms and a three-dimensional genetic algorithm were developed. Wang et al. (2005) considered a two-stage HFS scheduling problem with no wait time between two sequential operations of a job and no idle time between two consecutively processed jobs on machines of the second stage; they showed the problem complexity and presented a heuristic algorithm with asymptotically tight error bounds.

In addition, in combination with practical problems, it can also be seen in the literature that many researchers have considered heuristic and Lagrangian relaxation (LR) algorithms for solving the HFS scheduling problem. Hung-Tso Lin and Ching-Jong Liao (2003) addressed a two-stage HFS scheduling problem taken from a label sticker manufacturing company, whose objective is to schedule one day's mix of label stickers through the shop such that the weighted maximal tardiness is minimised; a heuristic composed of TST at stage 1 and FIFO at stage 2, together with a recursive alteration procedure, was proposed. Chen et al. (2007) proposed a taboo search (TS) algorithm for an HFS scheduling problem with precedence and blocking constraints in the context of a container terminal; the performance of the taboo search algorithm was analysed from the computational point of view. Xuan and Tang (2007) proposed a batch-decoupling-based LR algorithm for the HFS scheduling problem in the manufacturing process of iron and steel, to minimise the weighted completion time and the penalty of job waiting.

Motivated by the above literature, the authors note that applications of the discrete particle swarm optimisation (DPSO) algorithm to the HFS scheduling problem are relatively few in previous research. Tseng and Liao (2008) proposed a DPSO algorithm to solve hybrid flow shop scheduling problems with multiprocessor tasks. In this paper, however, the two-stage and three-stage NWHFS scheduling problems against the background of the MRAP are studied. As the DPSO algorithm has not yet been applied to solve the NWHFS scheduling problem, a novel DPSO algorithm is proposed to solve the two-stage and three-stage NWHFS scheduling problems against the background of the MRAP.

In this paper, both two-stage and three-stage NWHFS scheduling problems are considered in terms of the practical features of the scheduling problem of the two-stage and three-stage annealing processes in a typical mill roller production system. An integer programming model of the NWHFS scheduling problem for a typical mill roller annealing process is formulated. For solving the model, the discrete particle swarm optimisation combined with a no-wait algorithm in stages (DPSO-NWAS) is designed.

The rest of the paper is organised as follows. In Section 2, after introducing the background of the MRAP, an integer programming model of the NWHFS scheduling problem is presented for a typical mill roller annealing process. The DPSO-NWAS is designed in Section 3. Simulation results in Section 4 demonstrate the applicability and the effectiveness of the DPSO-NWAS; equipment reformation strategies of actual reference value are proposed as well. Finally, Section 5 concludes the paper with a short summary.

2.1 Background of MRAP

In the practical mill roller production process, different rough mill rollers have different annealing temperature curves, so it is required to aggregate them into families of rough mill rolls with similar annealing temperature curves before the MRAP. Moreover, there are two types of heating furnace (resistance furnace, gas furnace) in the MRAP, and each type of furnace has two different sizes. The families of rough mill rolls, after aggregation, are processed in the same furnace for a processing period of approximately 7 to 30 days. Therefore, the MRAP has two characteristics compared with other craft production: a long production period and a complicated production process. From the above introduction, it can be seen that the MRAP plays a specially important role in the mill roll production flow. The work-flow for a typical MRAP is given in Figure 2, in which the batches of rough mill rolls after casting are processed from HTF to LTF, and finally to PF.

Figure 1 Manufacturing process for the mill roll

2.2 Problem description

The NWHFS scheduling problem consists of a series of production stages, and each stage has several machines operating in parallel. Some stages may have only one machine, but at least one stage must have multiple machines. The flow of jobs through the workshop is unidirectional. The processing time of each job at each stage is known. Each job is processed by one machine in each stage, and it must go through one or more stages. Job waiting is not allowed between any two working procedures. Machines in each stage can be identical, similar or even unrelated.

For this problem, a batch of a rough mill roll family must be put into identical HTFs, LTFs and PFs at the same time in the practical production system of the MRAP; therefore, a batch of a rough mill roll family is called a job. In addition, in order to reduce the complexity of the problem, the following assumptions have been made:

(1) K is adequate, so that all the jobs will be able to complete the annealing operation.
(2) The difference in volume of the machines in the practical industrial production is not considered (identical machines).
(3) The batches of rough mill rolls are given and determined before the 2SAP and 3SAP processes.
(4) The start time unit of the first job in the first stage is one.

2.3 The model

Parameters:

N: total number of jobs (batches), (i = 1, 2, ..., N), where i is the index of jobs;
M: total number of stages, (j = 1, 2, ..., M), where j is the index of stages;
Mj: total number of machines in the jth stage, (m = 1, 2, ..., Mj), where m is the index of machines in the jth stage;
K: total number of time units to be considered, (k = 1, 2, ..., K), where k is the index of time units;
tij: the processing time of job i at stage j;
Cmax: the completion time of the last job.

Decision variables:

Xi = the start time unit of job i;
Yijk = 1 if job i is processed at stage j in time unit k, and 0 otherwise.

With the above notation of the NWHFS scheduling problem under consideration, the objective of the model can be formulated as follows:

$$\min C_{\max} = \min \max_{i=1,\ldots,N} \left( X_i + \sum_{j=1}^{M} t_{ij} \right) \qquad (1)$$

subject to constraints (2)-(7), described below.

Figure 2 Work-flow for a typical MRAP


In the model, the objective function is to minimise the makespan. Constraints (2) and (3) ensure that a job at each working procedure occupies one of the machines until the processing at this stage is finished; for all jobs, the time units before processing in the first stage and after processing in the last stage are zero. Constraint (4) reflects the relationship between Xi and Yijk. Constraint (5) ensures that the number of jobs does not exceed the machine capacity limit at stage j. Finally, constraints (6) and (7) define the ranges of the variables.

A general HFS scheduling problem with the objective of minimising makespan is known to be NP-hard. In this paper, however, an NWHFS scheduling problem with the objective of minimising makespan is considered, in which waiting between any two sequential working procedures of a job is not allowed. Owing to this characteristic of the problem, a novel DPSO-NWAS algorithm is designed to find satisfactory results.
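To make the model concrete, the following C++ sketch (ours, not the authors' implementation) evaluates a candidate assignment of start time units X_i: under the no-wait property, the stage intervals of each job follow directly from X_i and t_ij, so the makespan of equation (1) and the capacity constraint (5) can be checked by direct counting. The job data and machine counts are illustrative.

```cpp
// Minimal sketch: evaluating a candidate no-wait schedule. With no waiting,
// job i occupies stage j during [X_i + sum_{l<j} t_il, X_i + sum_{l<=j} t_il).
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    // Three jobs, two stages (HTF then LTF); times in days; one machine per stage.
    std::vector<std::vector<int>> t = {{3, 2}, {2, 4}, {4, 3}};  // t[i][j]
    std::vector<int> X = {1, 4, 8};                              // start time units
    std::vector<int> cap = {1, 1};                               // Mj per stage
    const int N = 3, M = 2;

    int Cmax = 0;
    for (int i = 0; i < N; ++i) {
        int finish = X[i];                   // no-wait: stages run back to back,
        for (int j = 0; j < M; ++j) finish += t[i][j];  // so C_i = X_i + sum_j t_ij
        Cmax = std::max(Cmax, finish);
    }

    bool feasible = true;                    // constraint (5): capacity per stage
    for (int j = 0; j < M && feasible; ++j)
        for (int k = 1; k < Cmax && feasible; ++k) {
            int busy = 0;
            for (int i = 0; i < N; ++i) {
                int s = X[i];                // start time unit of job i at stage j
                for (int l = 0; l < j; ++l) s += t[i][l];
                if (k >= s && k < s + t[i][j]) ++busy;
            }
            feasible = (busy <= cap[j]);
        }
    std::printf("Cmax = %d, %s\n", Cmax, feasible ? "feasible" : "infeasible");
}
```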

3 DPSO-NWAS for NWHFS problem

In this section, the background of PSO is briefly introduced and the DPSO-NWAS is proposed to solve the NWHFS scheduling problem. Clearly, the discrete particle needs to be redesigned to represent a processing sequence of N jobs. To move an invalid particle to a new feasible solution (a new sequence), a feasible solution adjustment (FSA) method is developed. Details are given in the sections below.

3.1 Background of PSO

The PSO algorithm is one of the latest meta-heuristic methods. Based on the metaphor of social interaction and communication, such as bird flocking and fish schooling, PSO was first introduced to optimise various continuous non-linear functions by Kennedy and Eberhart (1995). In their paper, to find the optimal solution, each bird, called a particle, adjusts its searching direction according to two factors: its own best previous experience (pbest) and the best experience of all the members (gbest). Shi and Eberhart (1998) named the former the cognition part and the latter the social part. For the social part, Kennedy and Eberhart (1995) developed the so-called gbest and lbest models for the neighbourhood structure of particles: in the gbest model, the companions' flying experience is obtained from the whole population, i.e. the original PSO version; in the lbest model, it is obtained from the local neighbourhood. However, many optimisation problems are set in a space featuring discrete or qualitative distinctions between variables. To meet this need, Kennedy and Eberhart (1997) developed a discrete version of PSO. This algorithm differs from the continuous PSO algorithm in two aspects: on the one hand, the particle is composed of binary variables; on the other hand, the velocity must be transformed into a change of probability, which is the chance of the binary variable taking the value one.

3.2 Basic idea of DPSO-NWAS

The applications of PSO to the NWHFS scheduling problem are still limited, although the advantages of PSO include a simple structure, immediately accessible practical applications, ease of implementation and rapid speed in acquiring solutions. However, the major obstacle to successfully applying a PSO algorithm to this problem is its continuous nature. To remedy this drawback, a novel DPSO algorithm is presented to solve the NWHFS scheduling problem with the objective of minimising makespan. The DPSO-NWAS proposed in this paper differs from the general DPSO algorithm in three characteristics. First, the DPSO and the NWAS are combined effectively for solving the NWHFS scheduling problem. Second, a sequential encoding of the particle is designed. Third, an FSA method is presented, which keeps the sequential encoding of a particle (the feasible solution) after the updating of the position and the velocity of the particles.


The flow chart of the DPSO-NWAS is depicted in Figure 3.

(1) A number of initial solutions equal to the size of the initial population is generated randomly.
(2) The completion time of all jobs for every initial (feasible) solution is calculated by the NWAS.
(3) The minimised makespan of this iteration is calculated by the DPSO, and the feasible solution of the next iteration is obtained by the FSA.
(4) Steps (2)-(3) are repeated until the stop criterion is satisfied; at last, the minimised makespan is obtained.

3.3 NWAS for NWHFS scheduling problem

In this paper, the MRAP can be seen as an HFS with a no-wait constraint between any two working procedures. Therefore, how to design a no-wait algorithm while meeting the machine constraints becomes one of the important and difficult problems. To resolve this difficulty, the NWAS is designed; its primary idea is described as follows (a compact sketch in code follows this description).

First, according to the initial processing order of all the jobs, the completion time of all the jobs at the first stage is calculated while meeting the restriction on the number of machines. Second, a non-decreasing order is obtained in accordance with the completion times of all the jobs in the first stage, and according to this non-decreasing order, all the jobs are processed in sequence at the second stage. Third, while meeting the restriction on the number of machines in the second stage, the earliest completion time of a processing job at the second stage is compared with the completion time of the next waiting job at the first stage, and the maximum of the two times is viewed as the start time of the next waiting job at the second stage; the first-stage completion times of those jobs which have not yet been processed at the second stage are then revised according to the difference between those two times. At last, the completion time of all the jobs at the second stage is calculated accordingly. In the same way, the completion times of all the jobs for an M-stage problem (M >= 3) can be obtained following the basic idea of the NWAS. In particular, the waiting time of every job between any two consecutive stages is zero. The flow chart of the two-stage NWAS is depicted in Figure 4.
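The following compact C++ sketch implements the two-stage case as we read the description above, assuming identical machines per stage and a simplified shift of the pending first-stage completion times; it is an illustration, not the authors' Visual C++ code.

```cpp
// Sketch of the two-stage NWAS: schedule stage 1, reorder by stage-1 completion,
// then enforce the no-wait transfer into stage 2, delaying pending jobs if needed.
#include <algorithm>
#include <cstdio>
#include <functional>
#include <numeric>
#include <queue>
#include <vector>

int main() {
    // Five batches: HTF and LTF processing times (days), M1 = 2, M2 = 2.
    std::vector<int> t1 = {5, 3, 6, 2, 4}, t2 = {4, 6, 3, 5, 2};
    const int M1 = 2, M2 = 2;
    const int N = static_cast<int>(t1.size());

    // Step 1: schedule stage 1 in the given order on the earliest-free furnace.
    std::priority_queue<int, std::vector<int>, std::greater<int>> f1;
    for (int m = 0; m < M1; ++m) f1.push(0);
    std::vector<int> C1(N);
    for (int i = 0; i < N; ++i) {
        const int free = f1.top(); f1.pop();
        C1[i] = free + t1[i];
        f1.push(C1[i]);
    }

    // Step 2: process jobs at stage 2 in non-decreasing order of C1.
    std::vector<int> order(N);
    std::iota(order.begin(), order.end(), 0);
    std::sort(order.begin(), order.end(), [&](int a, int b) { return C1[a] < C1[b]; });

    // Step 3: enforce no-wait; if the earliest stage-2 furnace frees up after
    // C1[i], delay the stage-1 completion of all not-yet-transferred jobs.
    std::priority_queue<int, std::vector<int>, std::greater<int>> f2;
    for (int m = 0; m < M2; ++m) f2.push(0);
    int Cmax = 0;
    for (int idx = 0; idx < N; ++idx) {
        const int i = order[idx];
        const int free = f2.top(); f2.pop();
        const int start = std::max(free, C1[i]);
        const int delta = start - C1[i];
        if (delta > 0)                       // shift pending jobs by the gap
            for (int r = idx; r < N; ++r) C1[order[r]] += delta;
        const int C2 = start + t2[i];
        f2.push(C2);
        Cmax = std::max(Cmax, C2);
    }
    std::printf("makespan = %d\n", Cmax);    // 16 for this data
}
```

For M >= 3 stages, the same transfer step is repeated between each pair of consecutive stages, as the description above indicates.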

The notations in Figure 4 are as follows:

M1: the number of machines in the first working procedure;
M2: the number of machines in the second working procedure;
$t_n$: the processing time of the nth job in the first working procedure in sequence S1 ($n = M_1 + 1, \ldots, N$);
$t'_n$: the processing time of the nth job in the second working procedure in sequence S2 ($n = M_2 + 1, \ldots, N$);
$T_n$: the completion time of the nth job in the first working procedure in sequence S1 ($n = M_1 + 1, \ldots, N$);
$T'_n$: the completion time of the nth job in the first working procedure in sequence S2 ($n = M_2 + 1, \ldots, N$);
$T''_n$: the completion time of the nth job in the second working procedure in sequence S2 ($n = M_2 + 1, \ldots, N$).

3.4.1 Updating of position

For the NWHFS scheduling problem, the position of the ath particle at the bth iteration is defined as $X_a^b = (x_{a1}^b, x_{a2}^b, \ldots, x_{aN}^b)$, $x_{ai}^b \in I$, where $x_{ai}^b$ is the ith job of the N jobs processed with respect to the ath particle at the bth iteration. Thus, each particle denotes a processing sequence of the N jobs. The current position of the ath particle is updated as follows:

$$X_a^{b+1} = X_a^b + V_a^{b+1}$$

Figure 3 Flow chart of DPSO-NWAS
Figure 4 Flow chart of two-stage NWAS

3.4.2 Updating of velocity

For the NWHFS scheduling problem, the velocity of the ath particle at the bth iteration is defined as $V_a^b = (v_{a1}^b, v_{a2}^b, \ldots, v_{aN}^b)$, where $v_{ai}^b$ is the movement for the ith job of the ath particle from the current position value to the next one at the bth iteration. The velocity updating step is to adjust the searching direction of the particle. The current velocity of the ith dimension of the ath particle is updated as follows:

$$v_{ai}^{b+1} = w\, v_{ai}^b + c_1 \xi \left( p_{ai}^b - x_{ai}^b \right) + c_2 \eta \left( p_{gi}^b - x_{ai}^b \right)$$

where $P_a^b = (p_{a1}^b, p_{a2}^b, \ldots, p_{aN}^b)$ denotes the best solution that the ath particle has obtained until the bth iteration, and $P_g^b = (p_{g1}^b, p_{g2}^b, \ldots, p_{gN}^b)$ denotes the best solution obtained from all particles in the population at the bth iteration. Moreover, w is the inertia weight, a parameter to control the impact of the previous velocities on the current velocity, $c_1$ and $c_2$ are acceleration coefficients, and $\xi$ and $\eta$ are pseudo-random numbers between [0, 1].

3.4.3 Feasible solution adjustment

In this paper, the particle encoding presents a

sequential encoding method, so it is an important

problem that the particle encoding still keep the

sequential encoding (the feasible solution) after

updat-ing of the position and the velocity of the particles Forsolving this problem, a FSA method is designed andexplained with the following instance:

Assuming the number of the jobs is 6 (N¼ 6), and

an initial position of particle Xb¼ ð2; 1; 4; 3; 5; 6Þ israndomly generated Then a new position of particle

the initial position Obviously, Xbþ1a ¼ð6; 1; 9; 5; 3; 8Þ is not a sequential encoding, so X0ashould be changed into a feasible solution Theinstance of the feasible solution adjustment can beshown as follows:

Step 1: Let the elements in Xbþ1a ¼

ð6; 1; 9; 5; 3; 8Þ less than 1 be 1, theelements in Xbþ1

a ¼ ð6; 1; 9; 5; 3; 8Þmore than 6 be 6 and the others be notchanged Then Xbþ1

a ¼ ð6; 1; 6; 5; 3; 6Þ

is generated

Step 2: Let the repetitive elements in Xbþ1a ¼

ð6; 1; 6; 5; 3; 6Þ keep an original valuerandomly and the other repetitive ele-ments be 0 Then Xbþ1a ¼ð6; 1; 0; 5; 3; 0Þ is generated

Step 3: A feasible solution X0a¼ ð2; 3; 1; 4; 5; 6Þ

is presented randomly Let the elements

in X0a¼ ð2; 3; 1; 4; 5; 6Þ that is equal tothe elements in Xbþ1a ¼ ð6; 1; 0; 5; 3; 0Þ

be 0 and the others be not changed.Then X0a¼ ð2; 0; 0; 4; 0; 0Þ is generated.Step 4: Substitute all the non-zero elements in

X0a¼ ð2; 0; 0; 4; 0; 0Þ for all the zeroelements in Xbþ1

a ¼ ð6; 1; 0; 5; 3; 0Þ derly, and then the feasible solution

namely, Xbþ1a ¼ ð6; 1; 2; 5; 3; 4Þ.Through the above description of FSA, an initialprocessing sequence Xbþ1a ¼ ð6; 1; 2; 5; 3; 4Þ, correla-tive parameters of DPSO and the adjust process ofFSA are given in Table 1
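A small C++ sketch of the FSA repair is given below; it reproduces the instance above, with the random choices of Steps 2 and 3 fixed to the values used in the example so that the output matches Table 1.

```cpp
// Sketch of the FSA repair on the worked instance: clamp, de-duplicate,
// then fill the holes from a reference permutation.
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    const int N = 6;
    std::vector<int> x = {6, 1, 9, 5, 3, 8};   // X_a^{b+1} = X_a^b + V_a^{b+1}

    // Step 1: clamp every element into [1, N].
    for (int& v : x) v = std::max(1, std::min(N, v));   // -> 6 1 6 5 3 6

    // Step 2: keep one occurrence of each repeated value, zero the others
    // (here the first occurrence; the paper keeps a randomly chosen one).
    std::vector<bool> seen(N + 1, false);
    for (int& v : x) {
        if (seen[v]) v = 0; else seen[v] = true;        // -> 6 1 0 5 3 0
    }

    // Steps 3-4: take a (randomly generated) feasible sequence, drop from it
    // the values already present, and pour the rest into the zero slots in order.
    const std::vector<int> ref = {2, 3, 1, 4, 5, 6};
    std::vector<int> leftovers;
    for (int v : ref) if (!seen[v]) leftovers.push_back(v);  // -> 2 4
    std::size_t next = 0;
    for (int& v : x) if (v == 0) v = leftovers[next++];      // -> 6 1 2 5 3 4

    for (int v : x) std::printf("%d ", v);                   // 6 1 2 5 3 4
    std::printf("\n");
}
```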

Table 1 An instance for FSA


4 Computational results

To test the solution quality and the performance of the approach described in Section 3, some experiments are conducted and analysed. These experiments are divided into two parts: first, real-world instances from Xingmachinery are considered, and then some large-scale problem instances. To demonstrate the effectiveness of the DPSO-NWAS, the first come-first process (FCFP) heuristic and the heuristics of the longest total processing time (LTPT) and the shortest total processing time (STPT) are used for comparison. The details of the heuristics are as follows (a small sketch in code follows this list):

- FCFP: the batch that arrives first is processed first.
- LTPT: the batches are processed in non-increasing order of their total processing times (at present, this rule is used in the practical production system).
- STPT: the batches are processed in non-decreasing order of their total processing times.

Table 4 Experimental results for the rough mill roll family of two-stage
Figure 5 Relative error for the optimal fitness of two-stage
Table 5 Experimental results for the rough mill roll family of three-stage
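For reference, the three sequences can be built with simple comparators, as in the following sketch (illustrative batch data; the total processing time of a batch is taken as the sum of its stage times):

```cpp
// Small sketch of building the three comparison sequences.
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <vector>

int main() {
    // Total processing time per batch, in arrival order (illustrative values).
    const std::vector<int> total = {9, 12, 7, 15, 10};

    std::vector<int> fcfp(total.size());
    std::iota(fcfp.begin(), fcfp.end(), 0);            // FCFP: arrival order

    std::vector<int> ltpt = fcfp, stpt = fcfp;
    std::sort(ltpt.begin(), ltpt.end(),                // LTPT: longest first
              [&](int a, int b) { return total[a] > total[b]; });
    std::sort(stpt.begin(), stpt.end(),                // STPT: shortest first
              [&](int a, int b) { return total[a] < total[b]; });

    for (int i : ltpt) std::printf("%d ", i);          // 3 1 4 0 2
    std::printf("\n");
}
```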

To get representative problem instances, the actual production data from Xingmachinery have been examined. The scheduling horizon is set to 30 days, as this study solves the MRAP scheduling problem for approximately one month. The number of rough mill roll families processed through two stages or three stages is not more than 20 batches or 10 batches, respectively. Generally, in the practical industrial production, the number of LTFs is 10, the proportion of HTFs to LTFs is approximately 1 to 2, and the number of PFs is 4. The processing times of the rough mill roll families are shown in Table 2 and Table 3, and the experimental results are listed in Table 4 and Table 5, respectively.

Here, the objective of the NWHFS scheduling problem is to minimise the makespan, so the fitness of the DPSO-NWAS is the makespan of all the batches, namely fitness = Cmax. In addition, in the DPSO-NWAS the population size K is 50 and the iteration number J is 500. The algorithms are implemented using Visual C++, and all the experiments are carried out on a Pentium-IV 3.0-GHz PC.

As can be seen from Table 4 and Table 5, the DPSO-NWAS is better than the three heuristics in terms of the optimal fitness. In order to show the quality of the optimal solutions intuitively, the relative errors of the optimal fitness of the four algorithms are compared, as shown in Figures 5 and 6. The relative errors (RE) can be computed as follows.
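Assuming the standard definition for such comparisons, with the smallest optimal fitness among the four algorithms on an instance taken as the reference, the RE is

$$\mathrm{RE} = \frac{C_{\max}^{\mathrm{alg}} - C_{\max}^{\mathrm{best}}}{C_{\max}^{\mathrm{best}}} \times 100\%$$

where $C_{\max}^{\mathrm{alg}}$ is the optimal fitness obtained by the algorithm under consideration and $C_{\max}^{\mathrm{best}}$ is the best optimal fitness among the four algorithms on the same instance.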

If the number of batches processed is small (N < 10), the ratio of machines between HTFs and LTFs has no obvious impact on the final optimal results; namely, the RE does not change obviously. However, as the number of batches increases step by step (N >= 10), the optimal results with the same number of machines in HTFs and LTFs are better than those with different numbers of machines in HTFs and LTFs (such as Instance 13, Instance 14 and Instance 17), and the RE changes obviously. At the same time, it can be clearly seen that Instance 9 in Figure 6 also supports this analysis.

Table 6 Experimental results of two-stage for DPSO-NWAS and CPLEX


Table 7 Experimental results of three-stage for DPSO-NWAS and CPLEX

According to the above analysis, the authors advised the production decision-makers of Xingmachinery to try to ensure that the number of machines in HTFs is the same as that in LTFs, so that the production efficiency of the MRAP can be promoted effectively when N >= 10. However, in the practical production system, the number of machines in HTFs is always less than that in LTFs, so some relevant adjustments should be made between HTFs and LTFs. On the one hand, production decision-makers can increase the quantity of HTFs appropriately, taking the cost of production into consideration; on the other hand, they can transform LTFs into HTFs. Through the above adjustments, the arrangement of the enterprise's resources can be further optimised to promote the production efficiency of the MRAP.

To illustrate further the advantages and performance of DPSO-NWAS, the instances of Table 4 and Table 5 are computed again with the CPLEX optimiser. The maximum computation time of the CPLEX optimiser is set to 10 hours. The optimal fitness and the computation time obtained by DPSO-NWAS and the CPLEX optimiser are given in Table 6 and Table 7.

Table 8. Experimental results of two-stage for large-scale problems.

Table 9. Experimental results of three-stage for large-scale problems.

In addition, some large-scale NWHFS scheduling problems were also studied. Here, the authors combine the three heuristics with the proposed DPSO-NWAS (called DPSO-NWAS-F, DPSO-NWAS-L and DPSO-NWAS-S, respectively), in which the optimal solutions of the three heuristics in Table 4 and Table 5 are used as initial solutions of DPSO-NWAS. In the experiment, 11 large-scale instances were designed with N = 50, 100, 200, M1 = 6, 8, 10, 15, 20 and M2 = 10, 12, 15, 20 for the two-stage NWHFS scheduling problem; at the same time, 11 large-scale instances were also designed with N = 50, 100, 200, M1 = 6, 8, 10, 15, 20, M2 = 10, 12, 15, 20 and M3 = 5, 6, 8, 10 for the three-stage NWHFS scheduling problem. The experimental results of the four algorithms are shown in Tables 8 and 9.
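How a heuristic solution seeds the swarm is easy to sketch; the fragment below is our own hedged illustration of the idea behind these variants (the actual DPSO-NWAS initialisation details are not reproduced here, and all names are assumptions):

```python
import random

def initial_swarm(heuristic_sequence, pop_size=50, seed=0):
    """Seed the swarm with one heuristic batch sequence (e.g. the LTPT
    solution for DPSO-NWAS-L) and fill the rest with random permutations."""
    rng = random.Random(seed)
    swarm = [list(heuristic_sequence)]
    while len(swarm) < pop_size:
        particle = list(heuristic_sequence)
        rng.shuffle(particle)
        swarm.append(particle)
    return swarm
```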

From the experimental results presented in Tables 6 to 9, the following observations can be made about the NWHFS scheduling problem.

(1) From Table 6 and Table 7, it can be seen clearly that the optimal fitness obtained by DPSO-NWAS is almost equal to that obtained by the CPLEX optimiser (except for instances 7 and 11 in Table 6 and instances 6 and 9 in Table 7). Furthermore, in terms of computation time, the proposed DPSO-NWAS is much faster than CPLEX; in particular, the CPLEX optimiser cannot obtain the optimal fitness within the maximum computation time when the number of batches (N) is more than 16.

(2) The results in Table 8 and Table 9 show that DPSO-NWAS-L is more effective than DPSO-NWAS for the large-scale NWHFS scheduling problem.

To present the entire set of research results more clearly, an outline is given in Table 10.

5 Concluding remarks

In this paper, the authors have addressed a real-life scheduling problem in an annealing shop of Xingmachinery. The MRAP problem is reduced to the NWHFS scheduling problem, and an integer programming model is built for it. The DPSO combined with NWAS is proposed to solve the NWHFS scheduling problem. The final simulation experiments not only demonstrate the applicability and efficiency of DPSO-NWAS but also show that the efficiency of the enterprise's production is improved further. Furthermore, equipment reformation strategies for MRAP are proposed, which are valuable for the enterprise's policy-makers in arranging production reasonably. Lastly, some large-scale instances of the NWHFS scheduling problem are studied effectively through DPSO-NWAS combined with the three heuristics.

Although the NWHFS scheduling problem arising from the roller annealing operation is studied in this paper, only the DPSO algorithm has been considered for solving it. Therefore, it is expected that other optimisation methods will be taken into consideration for the NWHFS scheduling problem in the future. In addition, the NWHFS scheduling problem with non-identical machines is also an important future research area.

Acknowledgements

This work was financially supported in part by the National Natural Science Foundation of China under Grants 70721001 and 70625001, the Fundamental Research Funds for the Central Universities of MOE (N090204001) and the National Basic Research Program of China (2009CB320601). The authors are greatly indebted to the referees for their invaluable suggestions.


A practical approach for partitioning free-form surfaces

Nguyen Van Tuong* and Přemysl Pokorný

Department of Manufacturing Systems, Technical University of Liberec, Liberec, Czech Republic

(Received 9 December 2009; final version received 14 June 2010)

*Corresponding author. Email: tuongnv@gmail.com

This paper describes a simple but effective method for partitioning free-form surfaces. Here, based on the surface curvatures, a free-form surface can be successfully partitioned into convex, concave and saddle regions. The chain code technique from the image processing field is applied to determine the boundaries of each region. These regions can be machined separately by different kinds and sizes of cutting tool; therefore, the machining time can be reduced significantly. A B-spline surface is given as an example to demonstrate the proposed method, including partitioning and machining. A Matlab® program was developed to perform the surface partitioning task and to get the coordinates of all points on the boundaries. Pro/Engineer® Wildfire 2.0 was used to create the computer-aided design (CAD) model of the free-form surface and generate the tool path for each region for illustration purposes.

Keywords: free-form surface; surface partitioning; boundary; chain code

1 Introduction

Free-form surfaces, also called sculptured surfaces, are widely used in the mould and die, automotive, aerospace and shipbuilding industries. These surfaces are often produced in three stages: roughing, finishing and benchwork (Warkentin et al. 2001). During rough machining, the material removal rate is high, which results in high cutting forces, and the surface quality is not so important. On the other hand, in the finish machining stage, the material removal rate is low and the tool is not subjected to high cutting forces. This stage also requires high surface quality. The surface after finishing still carries scallops; hence the benchwork, consisting of grinding and polishing, is required to remove them. This manual stage is time-consuming, and the manual polishing causes errors and undesirable irregularities. The time spent on the finishing and benchwork stages depends on the size of the scallops. It is stated that over 78% of the total production time is spent on finishing, grinding and polishing (Warkentin et al. 2001).

Traditionally, free-form surfaces are machined on three-axis numerical control (NC) machines using ball-end cutters. In this case, the tool axis is fixed and easy to position with respect to the part surface. In general, the overall productivity and material removal rate of finish surface machining with a ball-end cutter are very low (Wang 2005) and the process is inefficient (Jensen 1993, Vickers and Quan 1999). In five-axis machining of free-form surfaces, the tool can be a ball-end cutter, a toroidal cutter or a flat-end cutter, and the tool posture can be changed during machining. Five-axis machining offers numerous advantages in producing free-form surfaces: it reduces the machining time and improves the surface finish.

Regardless of machining in three- or five-axis mode, the productivity will be low if only one cutting tool is used to machine the entire surface. This is because the tool diameter is limited by the surface curvature. When choosing a tool for a particular surface, the tool diameter is restricted to a value that must not cause local gouging when the tool is machining in concave and saddle regions. If a ball-end cutter is used, the cutter curvature must be larger than the maximum normal curvature of the machined surface to avoid local gouging (Choi and Jerard 1999). For toroidal and flat-end cutters, to avoid local gouging, the effective cutting curvature of the tool must be greater than the maximum normal curvature of the surface (Rao and Sarma 2000, Li and Zhang 2005, 2006). Obviously, it will take a long time if only one tool is utilised to machine the entire surface.

In general, a free-form surface has regions such as convex, concave, plane and saddle (Elber and Cohen 1993, Rogers 2001, Radzevich 2008). Therefore, if the concave and saddle regions are machined separately by one or two suitable tools and the other regions are milled faster by a bigger tool, then the machining time will be reduced significantly. To do this, the design surface should be partitioned into regions. In CAD/CAM (computer-aided manufacturing) packages such as Catia®, Pro/Engineer® and Unigraphics NX®, users cannot separately choose every region and its boundary in the CAM programming stage, so the surface should be partitioned into patches in the modelling stage.

The objective of this research is to develop a practical machining method that can decrease the machining time and improve the surface quality by subdividing free-form surfaces into convex, concave and saddle regions. The concave regions can be milled by small ball-end cutters, while the other regions can be machined by bigger ball-end cutters or flat-end mills. This paper mainly addresses our work on the surface partitioning method based on the surface curvatures.

The outline of the paper is as follows. Section 2 presents some typical works on surface partitioning. The mathematical representation of free-form surfaces and surface properties are introduced in Section 3. The surface partitioning method is presented in Section 4 and the implementation in Section 5. Finally, Section 6 gives some conclusions and future work.

2 Related work

Up to now, much work has been done to develop methods for surface partitioning that are suitable for particular cases. The following are some typical studies on the surface partitioning of free-form surfaces.

In the study of Chen et al. (2003), the geometric properties of free-form surfaces, such as Gaussian and mean curvatures, maximum and minimum curvatures and the surface normal, are used to form a multi-dimensional vector for surface partitioning. First, in the rough surface subdivision, the grid points are roughly grouped, based on the surface curvatures, into three regions, namely convex, concave and saddle. Then, the subtractive fuzzy clustering method and the fuzzy C-means method are used to perform the fine surface subdivision. The former is used to estimate the number and centres of the surface patches, and the latter is applied to optimise the locations of the cluster centres for the grid points of each patch. The patch boundaries are then defined by using the Voronoi diagram. This method can divide a free-form surface into simple patches so that each patch can be properly accessed in a part setup and efficiently machined on a three-axis computer numerical control (CNC) machine equipped with a tilt-rotary table (3½½-axis CNC machining). In this work, a free-form surface is subdivided into a number of easy-to-machine surface patches, many more than the number of convex, concave and saddle regions on the surface. In fact, in the example illustrated in their research, a two-region B-spline surface was divided into 14 patches. However, too many subdivided patches could increase the machining time, even in five-axis machining.

Roman et al. (2006) also used the fuzzy C-means method to partition free-form surfaces for 3½½-axis CNC machining. In their study, the geometric properties of the surface are used as surface partitioning parameters and the patch boundaries are delimited by using the nearest neighbour method in the u–v plane. The surface is partitioned into patches ranging in number from one to 16 (this range can be modified). Based on the estimated machining time for all the partitions, the one with the shortest time is accepted for machining. Roman (2007) also utilised the k-means clustering method to subdivide a surface into patches, and the minimum intra-class distance method was implemented to identify the boundaries of these patches. In the studies of Roman and his colleagues, a free-form surface is also divided into a large number of patches for a better match between the tool and the workpiece. Although a technique for selecting the optimum number of partitions is applied, the number of patches is still much larger than the number of convex, concave and saddle regions on the surface. Therefore, these methods have the same drawback as the method presented in the study of Chen et al. (2003).

Ding et al. (2003, 2005) used the concepts of 'isophote' and 'light region' for surface partitioning. Isophotes are points on the surface that have the same angle, the isophote angle, between the surface normal and a given reference direction. A light region is a region in which the isophote angles are smaller than a prescribed value. The boundary of this region is an isoline that consists of isophotes. By applying isophotes, a free-form surface is partitioned into different regions according to the slope made with the normal direction of parallel intersecting planes. From the two examples in this study, it can be seen that a light region (partitioned region) can lie across two regions, for example a convex region and a concave region. This characteristic causes a limitation in choosing the tool diameters to machine the design surface. Hence, it is impossible to use optimal cutters.

Park and Choi (2001) presented a procedure to extract cutting areas of a free-form surface by using the Z-map model. Depending on what the Z-map stores, the extracted areas may correspond to various features to be machined, such as fillets, uncut areas, walls, floors and contour curves. Hence, cutting areas corresponding to a specific slope range or curvature range can be obtained.

Radzevich (2005) introduced an approach in which a given free-form surface can be divided into cutter-accessible and cutter-not-accessible regions. To resolve the problem of surface partitioning, focal surfaces for the part surface and for the machining surface of the cutting tool were applied. Based on the implementation of focal surfaces, the cutting-tool-dependent characteristic surface was introduced as a new type of characteristic surface. This enables the derivation of the equation for the boundary curve of the cutting-tool-accessible region. This approach is only convenient for particular cases. Later, the author extended the solution so that it is valid for all cases of machining of a smooth, regular, free-form surface on a multi-axis NC machine (Radzevich 2007).

By analysing the curvature of a free-form surface, Elber and Cohen (1993) developed another method for surface partitioning. In their work, second-order surface analysis is used to understand the curvature characteristics and the shape of the surface. They developed a hybrid method using symbolic and numeric operators to compute curvature properties and then to create trimmed regions that are solely convex, concave or saddle. The same method was used to determine the bounds of the regions. Bézier and non-uniform rational B-spline (NURBS) surface representations were used, and all surfaces were created using the Alpha_1 solid modeller. They used the curve on which the Gaussian curvature K(u,v) equals zero to divide free-form surfaces, which means that this curve must be defined. Yet the Gaussian curvature K(u,v) is a high-order expression of u and v even for low-order NURBS surface patches (Li and Zhang 2004). Hence, it is very difficult to solve the equation analytically. From their paper, it can be seen that high-level mathematical skills are needed to solve the problems of their approach and a high level of programming is required for implementation. Furthermore, they did not carry out any tool path generation or machining time estimation to validate the method. Elber also introduced the trichotomy of an arbitrary free-form surface into convex, concave and saddle regions in another paper (Elber 1995).

In this paper, a simple but effective method for surface partitioning is proposed. Here, free-form surfaces are partitioned into convex, concave and saddle regions based on the surface curvatures. The boundaries of each region are defined by using the chain code method from the image processing field.

3 Free-form surface representation and surface geometry

3.1 Free-form surface representation

Free-form surfaces are usually defined mathematically by parametric forms such as Bézier, B-spline and NURBS. These are the most popular forms for defining free-form surfaces in CAD/CAM systems. In this study, B-spline surfaces are used as examples.

A B-spline surface with control points is defined by (Rogers 2001)

$$S(u,v) = \sum_{i=1}^{n+1} \sum_{j=1}^{m+1} B_{i,j}\, N_{i,k}(u)\, M_{j,l}(v), \qquad u_{\min} \le u \le u_{\max},\quad v_{\min} \le v \le v_{\max} \qquad (1)$$

where $N_{i,k}(u)$ and $M_{j,l}(v)$ are the B-spline basis functions in the biparametric u and v directions, respectively; $B_{i,j}$ are the vertices of the polygonal control net; and k and l are the orders of the basis functions in the u and v directions.
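To make Equation (1) concrete, the sketch below evaluates a surface point directly from the Cox-de Boor recursion. It is a minimal Python illustration with assumed argument names (in practice a CAD kernel or library evaluator would be used):

```python
def basis(i, k, u, t):
    """Cox-de Boor recursion for the order-k basis N_{i,k}(u) over the knot
    vector t; order k corresponds to degree k - 1. The right end of each
    knot span is treated as half-open (a standard caveat at u = u_max)."""
    if k == 1:
        return 1.0 if t[i] <= u < t[i + 1] else 0.0
    left = right = 0.0
    if t[i + k - 1] > t[i]:
        left = (u - t[i]) / (t[i + k - 1] - t[i]) * basis(i, k - 1, u, t)
    if t[i + k] > t[i + 1]:
        right = (t[i + k] - u) / (t[i + k] - t[i + 1]) * basis(i + 1, k - 1, u, t)
    return left + right

def surface_point(u, v, B, k, l, tu, tv):
    """Evaluate Equation (1): S(u,v) = sum_i sum_j B[i][j] N_{i,k}(u) M_{j,l}(v).
    B is an (n+1) x (m+1) grid of 3D control points; tu, tv are knot vectors."""
    point = [0.0, 0.0, 0.0]
    for i in range(len(B)):
        Ni = basis(i, k, u, tu)
        if Ni == 0.0:
            continue
        for j in range(len(B[0])):
            w = Ni * basis(j, l, v, tv)
            for c in range(3):
                point[c] += w * B[i][j][c]
    return point
```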

In practice, a complex free-form surface consists of a large number of trimmed surfaces, and each surface can be represented in any of these forms. This paper deals only with a single tensor-product B-spline surface. The selection of this surface representation does not influence the result of the proposed surface partitioning method.

3.2 Surface geometry

Some important geometric parameters of a surface, which are associated with surface points, are the surface curvatures and the surface normal. The local surface shape around a point can be recognised from these parameters. The following are some geometric parameters of a surface (Rogers 2001, Radzevich 2008).

The unit normal vector, n, at a point (u, v) on a surface S(u, v) can be computed from the relation

$$\mathbf{n} = \frac{S_u \times S_v}{\lVert S_u \times S_v \rVert}$$

where $S_u$ and $S_v$ denote the partial derivatives of S with respect to u and v. The first fundamental form, $F_1$, is defined by

$$F_1 = E\,\mathrm{d}u^2 + 2F\,\mathrm{d}u\,\mathrm{d}v + G\,\mathrm{d}v^2, \qquad E = S_u \cdot S_u,\quad F = S_u \cdot S_v,\quad G = S_v \cdot S_v$$

The Gaussian curvature, K, and the mean curvature, H, of the surface S(u, v) at a point P(x, y, z) are

$$K = \frac{LN - M^2}{EG - F^2}, \qquad H = \frac{EN + GL - 2FM}{2(EG - F^2)}$$

where $L = S_{uu} \cdot \mathbf{n}$, $M = S_{uv} \cdot \mathbf{n}$ and $N = S_{vv} \cdot \mathbf{n}$ are the coefficients of the second fundamental form. The principal curvatures, $K_{\max}$ and $K_{\min}$, are the maximum and minimum of the normal curvature. They are given by

$$K_{\max} = H + \sqrt{H^2 - K}, \qquad K_{\min} = H - \sqrt{H^2 - K}$$

Depending on the values of K and H, the local surface shape around the point can be divided into four different types as follows (Makhanov and Anotaipaiboon 2007):

(1) K > 0 and H < 0: the local surface shape is convex.
(2) K > 0 and H > 0: the local surface shape is concave.
(3) K < 0 and H ≠ 0: the local surface shape is saddle.
(4) K = 0 and H = 0: the local surface shape is plane.
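As a quick sanity check of this classification (assuming the outward normal convention, which fixes the sign of H), a sphere of radius R has, at every point,

$$K = \frac{1}{R^2} > 0, \qquad H = -\frac{1}{R} < 0,$$

so it falls into the convex class, while a plane gives $K = H = 0$ and falls into the plane class.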

4 Surface partitioning and definition of region boundaries

4.1 Surface partitioning

The objective of surface partitioning is to divide a free-form surface into a number of regions that have the same characteristics. These regions can be machined separately with different types of cutter and setups. In this study, based on the Gaussian curvature and the mean curvature, a free-form surface is partitioned into convex (including plane) regions, concave regions and saddle regions. Combining the convex and plane regions reduces the number of regions and hence the calculation time. The main steps for surface partitioning are as follows:

(1) Present the mathematical model of the free-form surface.
(2) Sample the surface in the u and v directions to get a set of grid points.
(3) Calculate the Gaussian curvature and the mean curvature at each grid point.
(4) Group the neighbouring concave and saddle points to form concave regions and saddle regions, respectively. The remaining points form the convex regions.

The specific algorithm for surface partitioning is as follows:

Algorithm: partitioning free-form surfaces
Input
  A free-form surface S(u,v) {mathematical model}
Output
  Sets of points belonging to three regions {convex + plane, concave and saddle}
Begin
(1) Sample the input surface to get a set of grid points {P} and store all points in matrix M
(2) Calculate the parameters K and H at every point Pi,j
(3) FOR each grid point Pi,j of the point set {P}
      IF K ≥ 0 and H < 0 THEN save that point in matrix M1 {data matrix containing points on convex and plane regions}
      END IF
      IF K ≥ 0 and H > 0 THEN save that point in matrix M2 {data matrix containing points on concave regions}
      END IF
      IF K < 0 THEN save that point in matrix M3 {data matrix containing points on saddle regions}
      END IF
    END LOOP
End
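A compact transcription of this algorithm is sketched below in Python/NumPy as an illustration; it is not the authors' code (the paper's implementation is in MATLAB®), and the finite-difference curvature evaluation and all names here are our assumptions:

```python
import numpy as np

def partition_grid(S, du, dv):
    """Classify sampled surface points by the K/H test of the algorithm.

    S: array of shape (nu, nv, 3), with S[i, j] = S(u_i, v_j) on a regular grid
    du, dv: grid spacings used for the finite-difference derivatives
    Returns boolean masks (M1, M2, M3) of shape (nu, nv)."""
    # First and second partial derivatives by central differences.
    Su = np.gradient(S, du, axis=0)
    Sv = np.gradient(S, dv, axis=1)
    Suu = np.gradient(Su, du, axis=0)
    Svv = np.gradient(Sv, dv, axis=1)
    Suv = np.gradient(Su, dv, axis=1)

    # Unit normal n = (Su x Sv) / |Su x Sv|.
    n = np.cross(Su, Sv)
    n /= np.linalg.norm(n, axis=2, keepdims=True)

    # Coefficients of the first and second fundamental forms.
    E = np.einsum('ijk,ijk->ij', Su, Su)
    F = np.einsum('ijk,ijk->ij', Su, Sv)
    G = np.einsum('ijk,ijk->ij', Sv, Sv)
    L = np.einsum('ijk,ijk->ij', Suu, n)
    M = np.einsum('ijk,ijk->ij', Suv, n)
    N = np.einsum('ijk,ijk->ij', Svv, n)

    # Gaussian and mean curvatures (degenerate points would need a guard).
    denom = E * G - F ** 2
    K = (L * N - M ** 2) / denom
    H = (E * N + G * L - 2 * F * M) / (2 * denom)

    M1 = (K >= 0) & (H < 0)   # convex and plane regions (paper's M1)
    M2 = (K >= 0) & (H > 0)   # concave regions (paper's M2)
    M3 = K < 0                # saddle regions (paper's M3)
    return M1, M2, M3
```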

In this study, the above algorithm was programmed with MATLAB® 7.6.0. Matrices M1, M2 and M3 have the same size as the matrix M. They contain the identification numbers of the points which belong to the convex, concave and saddle regions, respectively.
