Sources of variability in the set-up of an indoor GPS
Carlo Ferri a*, Luca Mastrogiacomo b and Julian Faraway c
a Via XI Febbraio 40, 24060 Castelli Calepio, BG, Italy; b DISPEA, Politecnico di Torino, Corso Duca degli Abruzzi 24, Torino 10129, Italy; c Department of Mathematical Sciences, University of Bath, Bath BA2 7AY, UK
(Received 17 June 2009; final version received 5 January 2010)
An increasing demand for extended flexibility in model types and production volumes in the manufacture of large-size assemblies has generated a growing interest in reducing the deployment of jigs and fixtures during assembly operations. A key factor enabling and sustaining this reduction is the constantly expanding availability of instruments for dimensional measurement of large-size products. However, the increasing complexity of these measurement systems and their set-up procedures may hinder the final users in their effort to assess whether the performance of these instruments is adequate for pre-specified inspection tasks. In this paper, mixed-effects and fixed-effects linear statistical models are proposed as a tool to assess quantitatively the effect of set-up procedures on the uncertainty of measurement results. This approach is demonstrated on a Metris Indoor GPS system (iGPS). The main conclusion is that more than 99% of the variability in the considered measurements is accounted for by the number of points used in the bundle adjustment procedure during the set-up phase. Also, different regions of the workspace have significantly different error standard deviations and a significant effect on the transient duration of measurement. This is expected to affect adversely the precision and unbiasedness of measurements taken with Indoor GPS when tracking moving objects.
Keywords: large scale metrology; large volume metrology; distributed coordinate measuring systems; Indoor GPS; iGPS; uncertainty
1 Introduction
During the last decades, research efforts in coordinate-measuring systems for large-size objects have led to a broadening of the range of instruments commercially available (cf. Estler et al. 2002).
These coordinate measurement instruments can be grouped into two categories: centralised and distributed systems (Maisano et al. 2008).
A centralised instrument is a measuring system constituted by a single hardware element that in performing a measurement may require one or more ancillary devices such as, typically, a computer. An example of a centralised instrument is a laser tracker that makes use of a spherically-mounted reflector (SMR) to take a measurement of point spatial coordinates and that needs to be connected to a monitor of environmental conditions and to a computer.
A distributed instrument is a collection of separate independent elements whose separately gathered measurement information needs to be jointly processed in order for the system to determine the coordinates of a point. A single element of the system typically cannot provide measurements of the coordinates of a point when standing alone. Precursors of these apparatuses can be identified in wireless indoor networks of sensors for automatic detection of object location (cf. Liu et al. 2007). These networks can be deployed for inspection tasks in manufacturing operations once their trueness has been increased. The term trueness is defined in BS ISO 5725-1:1994 as 'the closeness of agreement between the average value obtained from a large series of test results and an accepted reference value' (Section 3.7).
When inspecting parts and assemblies having large dimensions, it is often more practical or convenient to bring the measuring system to the part rather than vice versa, as is typically the case on a smaller scale. Therefore, instruments for the inspection of large-size objects are usually portable. In performing a measurement task, a single centralised instrument, say a laser tracker, can then be deployed in a number of different positions, which can also be referred to as stations. By measuring some fixed points when changing station, the work envelope of the instrument can be significantly enlarged, enabling a single centralised instrument to be used for inspection of parts significantly larger than its original work envelope. To illustrate this concept, in Figure 1(a) the top view of three geometrical solids, a cylinder, a cube and an octahedron (specifically a hexagonal prism) is displayed. These
*Corresponding author. Email: info@carlo.comyr.com
International Journal of Computer Integrated Manufacturing
Vol. 23, No. 6, June 2010, 487–499
ISSN 0951-192X print/ISSN 1362-3052 online
© 2010 Taylor & Francis
DOI: 10.1080/09511921003642147
solids are inspected by a single centralised instrument such as a laser tracker, which is moved across different positions (1, 2, …, 6 in the figure), from each of which the coordinates of the points P1, P2 and P3 are also measured. In this respect, a single centralised system appears therefore comparable with a distributed system, whose inherent multi-element nature enables work envelopes of any size to be covered, provided that a sufficient number of elements are chosen. This characteristic of a measuring system of adapting itself to suit the scale of a measuring task is often referred to as scalability (cf. Liu et al. 2007). The concept above can therefore be synthesised by saying that a centralised system is essentially scalable in virtue of its portability, whereas a distributed system is such due to its intrinsic modularity.
With a single centralised instrument, measurement tasks within a working envelope, however extended, cannot be performed concurrently but only serially. Each measurement task to be performed at a certain instant in time needs a dedicated centralised instrument. This is shown in Figure 1(a), where the cylinder is measured at the current instant with the instrument in position 2, whereas the hexagonal prism is going to be measured at a future instant when the instrument will be placed in position 3. With a distributed system this limitation does not hold. With a distributed system, concurrent measurement tasks can be performed provided that each of the concurrent tasks has a sensor or subgroup of sensors within the distributed instrument dedicated to it at a specific instant. In Figure 1(b), the same three objects considered in the case of a centralised instrument are concurrently inspected using a distributed system constituted by six signal-transmitting elements (1, 2, …, 6) and three probes, each carrying two signal-receiving elements whereby the coordinates of the probe tips are calculated.
This characteristic of distributed systems is especially advantageous when concurrently tracking the position of multiple large-size components during assembly operations. The sole way of performing the same concurrent operation with a centralised system would require the availability and use of more than a single centralised instrument (a laser tracker, for instance), with potentially detrimental economic consequences for the manufacturing organisation in terms of increased fixed assets, maintenance costs and increased complexity of the logistics.
A number of different distributed systems have been developed recently, some as prototypes for research activities (cf., for instance, Priyantha et al. 2000; Piontek et al. 2007), some others with a level of maturity sufficient for them to be made commercially available (cf., for instance, Welch et al. 2001; Maisano et al. 2008). In this second case, the protection of intellectual property (IP) rights prevents users' transparent access to the details of the internal mechanisms and of the software implemented in the systems. This may constitute a barrier to a full characterisation of the performance of the equipment. This investigation endeavours to provide better insight into the performance of such systems by using widespread statistical techniques. The main objective is therefore not to criticise or evaluate the specific instrument considered hereafter, but to demonstrate the use of techniques that may be beneficially deployed also on other distributed systems. In particular, the effect of discretionary set-up parameters on the variability and stability of the measurement results has been analysed.
In the next section the main characteristics of the Metris iGPS, which is the instrument considered, are described. A cone-based mathematical model of the system is then presented in Section 3. The experimental set-up is described in Section 4 and the results of the tests are analysed in Section 5. Conclusions are drawn thereafter.
2 Physical description of the instrument
The instrument used in this study is the iGPS (alias indoor GPS) manufactured by Metris. The description of such a system provided in this section is derived from publicly available information.
The elements constituting the system are a set of two or more transmitters, a number of wireless sensors (receivers) and a unit controlling the overall system and processing the data (Hedges et al. 2003; Maisano et al. 2008).
Transmitters are placed in fixed locations within the volume where measurement tasks are performed. Such a volume is also referred to as a workspace. Each transmitter has a head rotating at a constant angular velocity, which is different for each transmitter, and radiates three light signals: two infrared fan-shaped laser beams generated by the rotating head, and one infrared strobe signal generated by light-emitting diodes (LEDs). The LEDs flash at constant time intervals, ideally in all directions but practically in a multitude of directions. Each of these time intervals is equal to the period of revolution of the rotating head on which the LEDs are mounted. For any complete revolution of the rotating head a single flash is emitted virtually in all directions. In this way, the LED signals received by a generic sensor from a transmitter constitute a periodic train of pulses in the time domain where each pulse is symmetric (cf. Hedges et al. 2003, column 6).
The rotating fan-shaped laser beams are tilted by two pre-specified opposite angles, φ1 and φ2 (e.g. −30° and 30°, respectively), from the axis of rotation of the head. These angles are also referred to as slant angles. The fact that the angular velocity of the head is different for different transmitters enables each transmitter to be distinguished (Sae-Hau 2003). A schematic representation of a transmitter at the instant t1 when the first fanned beam L1 intersects the sensor in position P, and at the instant t2 when the second fanned beam L2 passes through P, is shown in Figure 2, where two values for the slant angles are also shown. Ideally, the shape of each of the fanned beams should be adjustable to adapt to the characteristics of the measurement tasks within a workspace. Although two beams are usually mounted on a rotating head, configurations with four beams per head have also been reported (Hedges et al. 2003, column 5). To differentiate between the two fanned beams on a transmitter, their time position relative to the strobe signal is considered (see Figure 2).
The fanned beams are often reported as planar (Liu et al. 2008; Maisano et al. 2008), as depicted in Figure 2. Yet, the same beams when emitted from the source typically have a conical shape that is first deformed into a column via a collimating lens and then into a fan shape via a fanning lens (Hedges et al. 2003, column 6). It is believed that only an ideal chain of deformations would transform the initial conical shape completely and perfectly into a plane. For these reasons, the final shape of the beam is believed to preserve traces of the initial shape and to be more accurately modelled with a portion of a conical surface, rather than a plane. Each of the two conical surfaces is then represented by a vector, called a cone vector, that is directed from the apex to the centre of the circular directrix of the cone. The angle between a cone vector and any of the generatrices on the cone surface is called the cone central angle. This angle is
designated by α1 and α2 for the first and the second beams, respectively. The apex of each cone lies on the axis of rotation of the spinning head. In Figure 3, a schema of the portion of the conical surface representing a rotating laser beam is displayed. In this figure, two portions of conical surfaces are shown to illustrate α2 and φ2 (φ2 > 0, having established counter-clockwise angle measurements around the x-axis as positive).
The angular separation between the optical axes of the two laser modules in the rotating head, when observed from the direction of the rotational axis of the spinning head, is denoted by θoff. The rotation of the head causes each of the cone surfaces, and therefore their cone vectors, to revolve around the same axis. The angular position of the cone vector at a generic instant t is denoted by θ1(t) and θ2(t) for the first and second fanned beams, respectively. These angles are also referred to as scan angles and are defined relative to the strobe LED synchronisation signal, as illustrated below.
Wireless sensors are made of one or more photodetectors and a wireless connection to the controlling unit for the transmission of the positional information to the central controlling unit. The use of the photodetectors enables the conversion of a received signal (stroboscopic LED, first fanned laser, second fanned laser) into the instant of time of its arrival (t0, t1 and t2 in Figure 2). The time intervals between these instants can then be converted into measurements of scan angles from the knowledge of the angular velocity of the head for each transmitter (ω in Figure 2). It is expected that θ1 = ω(t1 − t0) and that θ2 = ω(t2 − t0). At the instant t0 when the LED signal reaches the generic position P, the same LED signal also flashes in any direction. Therefore, at the very same instant t0, the LED also fires in the reference direction from which the angles in the plane of rotation are measured (i.e. θ1 = θ2 = 0).
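The relations θ1 = ω(t1 − t0) and θ2 = ω(t2 − t0) can be sketched directly; the function name, the 40 Hz head speed and the pulse times below are invented for the example and are not iGPS specifics:

```python
import math

def scan_angles(t0, t1, t2, omega):
    """Convert pulse arrival times (s) into scan angles (rad).

    t0     : arrival of the strobe LED pulse (reference direction)
    t1, t2 : arrivals of the first and second fanned beams
    omega  : angular velocity of the transmitter head (rad/s)
    """
    theta1 = omega * (t1 - t0)  # theta_1 = omega * (t1 - t0)
    theta2 = omega * (t2 - t0)  # theta_2 = omega * (t2 - t0)
    return theta1, theta2

# A head spinning at 40 Hz (purely illustrative): one revolution every 25 ms.
omega = 2.0 * math.pi * 40.0
theta1, theta2 = scan_angles(0.0, 0.004, 0.009, omega)
```

Because each transmitter spins at its own constant ω, the same pulse train also identifies which transmitter a sensor is hearing.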
In this study, any plane orthogonal to the axis of rotation is referred to as a plane of rotation. For any spherical coordinate system having the rotational axis of the transmitter as the z-axis and the apex common to the aforementioned conical surfaces as the origin, the angle θ1 swept by the cone vector of the first fanned beam in the time interval t1 − t0 is connected with the azimuth of P measured from any possible reference direction x established in the xy-plane, which is the plane of rotation passing through the common apex of the conical surfaces.
From a qualitative point of view, the elevation (or the zenith) of P can be related to the quantity ω(t2 − t1). By analogy with Figure 2, it is argued that, also in the case of conical fanned beams, when the elevation (or zenith) of P increases (decreases), the time interval t2 − t1 also increases (decreases). Vice versa, the only reason why one time interval t2 − t1 can be larger than another is that the position of the sensor in the first case has a higher elevation than in the second.
In the most typical configuration, two receivers are mounted on a wand or a bar in calibrated positions. The tip of the wand constitutes the point for which the location is calculated based on the signals received by the two sensors. When the receivers are mounted on a bar, the bar is often referred to as a vector bar. If such a receiver-mounted bar is short, say with a length between 100 and 200 mm, it is called a mini vector bar. These devices are equipped with firmware providing processing capabilities. The firmware enables the computation of azimuth and elevation of the wand or bar tip for each of the spherical reference systems associated with each of the transmitters in the system. This firmware is called a position computation engine (PCE).
A vector bar therefore acts as a mobile instrument for probing points, as shown in the schema of Figure 1(b). More recently, receiving instruments with four sensors have been developed, enabling the user to identify both the position of the tip and the orientation of the receiving instrument itself.
3 The role of the bundle adjustment algorithms in the indoor GPS
The computation of the azimuth and elevation of the generic position P in the spherical reference system of a generic transmitter enables the direction of the oriented straight line l from the origin (the apex of the cones) to P to be identified. However, it is not possible to determine the location of P on l. In other words, it is not possible to determine the distance of P from the origin. Therefore, at least a second transmitter is necessary to estimate the position of P in a reference system {Uref} arbitrarily predefined by the user. In fact, assuming that the position and orientation of the ith and jth transmitters in {Uref} are known, the coordinates of the generic point on li and on lj can be transformed from the spherical reference system of the transmitters to the common reference system {Uref} (cf. Section 2.3 in Craig 1986). Then, P can be estimated with some nonlinear least squares procedure, which minimises the sum of the squared distances between the estimate of the coordinates of P in {Uref} and the generic points on li and lj. Only in an ideal situation would li and lj intersect. In any measurement result, the azimuth and elevation are only known with uncertainty (cf. Sections 2.2 and 3.1 in JCGM 2008). Very little likelihood exists that these measured values for li and lj coincide with the 'true' unknown measurands. The same very little likelihood applies therefore to the existence of an intersection between li and lj. When adding a third, kth transmitter, qualitative geometrical intuition supports the idea that the distances of the optimal P from each of the lines li, lj and lk are likely to become less variable, approaching and stabilising around a limit that can be considered typical for the measurement technology under investigation. Increasing the number of transmitters is therefore expected to reduce the variability of the residuals. The estimation of the coordinates of P, when the position of the transmitters is known, is often referred to as a triangulation problem (Hartley and Sturm 1997; Savvides et al. 2001).
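The least-squares intersection described above has a closed-form linear solution when each line is represented by a point c (the transmitter position) and a unit direction u: the point minimising the summed squared distances to the lines solves a small 3 × 3 linear system. This is a generic sketch with invented coordinates, not the system's undisclosed algorithm:

```python
import numpy as np

def triangulate(origins, directions):
    """Point minimising the sum of squared distances to a set of 3D lines.

    origins    : (n, 3) array, a point on each line (e.g. transmitter positions)
    directions : (n, 3) array, direction vectors of the lines towards the target
    """
    a_sum = np.zeros((3, 3))
    b_sum = np.zeros(3)
    for c, u in zip(origins, directions):
        u = u / np.linalg.norm(u)
        proj = np.eye(3) - np.outer(u, u)  # projector onto plane normal to u
        a_sum += proj
        b_sum += proj @ c
    # Stationarity of sum ||proj_k (p - c_k)||^2 gives a_sum @ p = b_sum
    return np.linalg.solve(a_sum, b_sum)

# Two transmitters (illustrative positions), both aimed exactly at (1, 1, 1):
target = np.array([1.0, 1.0, 1.0])
origins = np.array([[0.0, 0.0, 0.0], [4.0, 0.0, 0.0]])
dirs = target - origins
p = triangulate(origins, dirs)
```

With noisy azimuth/elevation measurements the lines would not intersect, and p would be the compromise point whose residual distances stabilise as more transmitters are added.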
If the position and orientation of the transmitters in {Uref} are not known, then they need to be determined before the actual usage of the measurement system. To identify the position and orientation of a transmitter in {Uref}, six additional parameters need to be estimated (cf. Section 2.2 in Craig 1986). This more general engineering problem is often referred to as three-dimensional (3D) reconstruction and occurs in areas as diverse as surveying networks (Wolf and Ghilani 1997), photogrammetry and computer vision (Triggs et al. 2000; Lourakis and Argyros 2009). The estimation of three-dimensional point coordinates together with transmitter positions and orientations to obtain a reconstruction which is optimal under a pre-specified objective function and an assumed error structure is called bundle adjustment (BA). The objective or cost function describes the fitting of a mathematical model of the measurement procedure to the experimental measurement data. Most often, but not necessarily, this results in minimising the sum of the squares of the deviations of the measurement data from their values predicted with nonlinear functions of the unknown parameters (Triggs et al. 2000; Lourakis and Argyros 2009). A range of general-purpose optimisation algorithms, such as those of Gauss–Newton and Levenberg–Marquardt, can be used to minimise the nonlinear objective function. Alternatively, significantly increased efficiency can be gained if these algorithms are adjusted to account for the sparsity of the matrices arising in the mathematical description of 3D reconstruction problems (Lourakis and Argyros 2009).
In the measurement system investigated, a BA algorithm is run in a set-up phase whereby the position and orientation of each transmitter in {Uref} are determined. Therefore, during the subsequent deployment of the system (measuring phase), the coordinates of a point are calculated using the triangulation methods mentioned above.
However, as is typically encountered in commercial measurement systems, the BA algorithms implemented in the system are not disclosed completely to the users. This makes it difficult for both users and researchers to devise analytical methods to assess the effects of these algorithms on the measuring system. In this investigation, consideration is given to experimental design and statistical techniques to estimate the effect that decisions taken when running the built-in BA algorithm exert on measurement results.
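Since the built-in BA algorithm is undisclosed, only the generic idea can be illustrated: estimating a transmitter pose by nonlinear least squares with a Levenberg–Marquardt solver. The sketch below uses a deliberately simplified 2D, azimuth-only problem; the target coordinates, the "true" pose and the use of SciPy's solver are all assumptions of the example:

```python
import numpy as np
from scipy.optimize import least_squares

# Known reference targets (2D, illustrative) seen by one transmitter.
targets = np.array([[0.0, 5.0], [4.0, 4.0], [6.0, 0.0], [2.0, -3.0]])

def residuals(pose, targets, measured):
    """Differences between predicted and measured azimuths for a pose."""
    tx, ty, heading = pose
    predicted = np.arctan2(targets[:, 1] - ty, targets[:, 0] - tx) - heading
    diff = predicted - measured
    return np.arctan2(np.sin(diff), np.cos(diff))  # wrap into (-pi, pi]

# Synthetic 'measurements' generated from a known true pose (x, y, heading).
true_pose = np.array([1.0, 1.0, 0.3])
measured = (np.arctan2(targets[:, 1] - true_pose[1],
                       targets[:, 0] - true_pose[0]) - true_pose[2])

# Levenberg-Marquardt minimisation of the sum of squared angle residuals.
fit = least_squares(residuals, x0=np.zeros(3), args=(targets, measured),
                    method='lm')
```

A real BA would estimate all transmitter poses and target coordinates jointly in 3D, typically exploiting the sparsity mentioned above; the principle of minimising squared measurement residuals is the same.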
4 Experimental set-up

Four transmitters were mounted on tripods and placed at a height of about two metres from floor level. The direction of the rotational axis of each transmitter spinning head was approximately vertical. Each of the four transmitters was placed at a corner of an approximate square with sides of about eight metres.
A series of six different target fields labelled I, II, III, IV, V and VI and respectively consisting of 8, 9, 10, 11, 12 and 13 targets was considered during the BA procedure. Each of these fields was obtained by adding one target to the previous field, so that the first eight targets are common to all the fields, the first nine targets are common to the last five fields, and so on. A schema of this experimental configuration is shown in Figure 4.
All the fields were about 1.2 m above floor level. The target positions were identified using an isostatic support mounted on a tripod which was moved across the workspace. A set of the same isostatic supports was also available on a carbon-fibre bar that was used to provide the BA algorithm built in the system with a requested measurement of length (i.e. to scale the system). A distance of 1750 mm between two isostatic supports on the carbon-fibre bar was measured on a coordinate-measuring machine (CMM). The carbon-fibre bar was then placed in the central region of the workspace. The coordinates of the two targets 1750 mm apart were measured with the iGPS and their 1750 mm distance was used to scale the system in all the target fields considered. In this way, the scaling procedure is not expected to contribute to the variability of the measurement results even when different target fields are used in the BA procedure. Figure 5 shows an end of the vector bar used in this set-up (the large sphere in the figure), while coupled with an isostatic support (the three small spheres) during the measurement of a target position on the carbon-fibre bar.
The BA algorithm was run on each of these six target fields so that six different numerical descriptions of the same physical positions and orientations of the transmitters were obtained.
Six new target locations were then identified using the isostatic supports on the carbon-fibre bar mentioned above. Using the output of the BA executions, the spatial coordinates of these new target locations were measured. The approximate position of the six targets relative to the transmitters is shown in the schema of Figure 6.
Each target measurement consisted in placing the vector bar in the corresponding isostatic support and holding it for about 30 s. This enabled the measurement system to collect and store about 1200 records of target coordinates in {Uref} for each of the six targets. In this way, however, the number of records for each target is different, owing to the human impossibility of manually performing the measurement procedure with a degree of time control sufficient to prevent this situation occurring.
5 Results

Each of the six target positions displayed in Figure 6 and labelled 1, 2, 3, 4, 5 and 6 was measured using each of the six BA set-ups I, II, …, VI, giving rise to a grouping structure of 36 measurement conditions (cells).
When measuring a target location, its three Cartesian coordinates in {Uref} are obtained. To reduce the complexity of the analysis from three-dimensional to mono-dimensional, instead of these coordinates the distance of the targets from the origin of {Uref} is considered. Central to this investigation is the estimation of the effect on the target–origin distance due to the choice of a different number of target points when running the BA algorithm. The target locations 1, 2, …, 6 do not identify points on a spherical surface, so they are at different distances from the origin of {Uref}, regardless of any possible choice of such a reference system. These target locations therefore contribute to the variability of the measurements of the target–origin distance, whereby the detection of a potential contribution of the BA set-ups to the same variability can be hindered. To counteract this masking effect, the experiment was carried out by selecting first a target location and then randomly assigning all the BA set-ups for that location to the sequence of tests. This was repeated for all the six target positions. Such an experimental strategy introduces a constraint on a completely random assignment of the 36 measurement conditions to the run order. In the literature (cf. Chapters 27, 16 and 8 in Neter et al. 1996, Faraway 2005 and Faraway 2006, respectively), this strategy is referred to as a randomised complete block design (RCBD). The positions of the targets 1, 2, …, 6 constitute a blocking factor identifying an experimental unit or block, within which the BA set-ups are tested. The BA set-ups I, II, …, VI constitute a random sample of all the possible set-ups that differ only in the choice of the location and number of points selected when running the BA algorithm during the system set-up phase. On the other hand, the analysis of the obvious contribution to the variability of the origin–target distance when changing the location of the targets would not add any interesting information to this investigation. These considerations lead to describing the experimental data of the RCBD with a linear mixed-effects statistical model, which is first defined and then fitted to the experimental data.
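The blocking-and-randomisation scheme just described can be sketched as follows; the block and treatment labels come from the text, while the seed is an arbitrary choice made only so the run order is reproducible:

```python
import random

random.seed(42)  # arbitrary seed, only for a reproducible run order

targets = [1, 2, 3, 4, 5, 6]                   # blocks (target locations)
setups = ["I", "II", "III", "IV", "V", "VI"]   # treatments (BA set-ups)

run_order = []
for target in targets:              # select a target location (block) first
    order = setups[:]
    random.shuffle(order)           # randomise the set-ups within the block
    run_order.extend((target, setup) for setup in order)

# 36 runs in total: each BA set-up appears exactly once within each block
```

Randomising only within blocks is exactly the constraint on full randomisation that makes this a randomised complete block design.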
5.1 Mixed-effects models
The distance dij of the ith (i = 1, …, 6) target from the origin measured when using the jth (j = I, …, VI) BA procedure is modelled as the sum of four contributions: a general mean μ, a fixed effect τi due to the selection of the ith target point, a random effect bj due to the assignment of the jth BA set-up and a random error eij due to all those sources of variability inherent in any experimental investigation that it is not possible or convenient to control. This is described by the equation

dij = μ + τi + bj + eij.    (1)
In Equation (1) and hereafter, the Greek symbols are parameters to be estimated and the Latin symbols are random variables. In particular, the bj's have zero mean and standard deviation σb; the eij's have zero mean and standard deviation σ. The eij's are assumed to be independent, normally distributed random variables, i.e. eij ~ N(0, σ²). The same applies to the bj's, namely bj ~ N(0, σb²). The eij's and the bj's are also assumed to be independent of each other. Under these assumptions, the variance of dij, namely σd², is given by the equation

σd² = σb² + σ².    (2)

Using the terminology of the 'Guide to the expression of uncertainty in measurement' (cf. Definition 2.3 in JCGM 2008), σd is the standard uncertainty of the result of the measurement of the origin–target distance.
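The variance decomposition of Equation (2) can be illustrated on simulated data with the classical method-of-moments (ANOVA) estimates for a balanced RCBD; all numerical values below are invented for the illustration. For balanced data such as these, the moment estimates are known to agree with the REML estimates used in the paper whenever the moment estimate of σb² is non-negative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative (not the paper's) parameter values, in millimetres.
mu, sigma_b, sigma = 5000.0, 0.5, 0.1
tau = np.linspace(-100.0, 100.0, 6)      # fixed target effects
b = rng.normal(0.0, sigma_b, 6)          # random BA set-up effects

# One measurement result per cell: d[i, j] = mu + tau_i + b_j + e_ij  (Eq. 1)
d = mu + tau[:, None] + b[None, :] + rng.normal(0.0, sigma, (6, 6))

# Balanced RCBD mean squares: blocks = targets (rows), treatments = set-ups
grand = d.mean()
ms_setup = 6.0 * ((d.mean(axis=0) - grand) ** 2).sum() / 5.0   # 5 d.o.f.
resid = d - d.mean(axis=0)[None, :] - d.mean(axis=1)[:, None] + grand
mse = (resid ** 2).sum() / 25.0                                # 5*5 d.o.f.

sigma_hat = np.sqrt(mse)                                 # estimates sigma
sigma_b_hat = np.sqrt(max((ms_setup - mse) / 6.0, 0.0))  # estimates sigma_b
```

The standard uncertainty of Equation (2) is then estimated as the square root of `sigma_b_hat**2 + sigma_hat**2`.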
As pointed out in the previous section, the number of determinations of the target–origin distance that have been recorded is different for each of the 36 measurement conditions. For simplicity of the analysis, the number of samples gathered in each of these conditions has been made equal by neglecting the samples in excess of the original minimum sample size over all the cells. This resulted in considering 970 observations in each cell. The measurement result provided by the instrument in each of these conditions and used as a realisation of the response variable dij in Equation (1) is then defined as the sample mean of these 970 observations. There is a single measurement result in each of the 36 cells. The parameters of the model, i.e. μ, τi, σb and σ, have been estimated by the restricted maximum likelihood (REML) method as implemented in the lme() function of the package nlme of the free software environment for statistical computing and graphics called R (cf. R Development Core Team 2009). More details about the REML method and the package nlme are presented in Pinheiro and Bates (2000). The RCBD assumes that there is no interaction between the block factor (target locations) and the treatment (BA set-up). This hypothesis is necessary so that the variability within a cell, represented by the variance σ² of the random errors, can be estimated when only one experimental result is present in each cell. In principle, such an estimation is enabled by considering the variation of the deviations of the data from their predicted values across all the cells. This would estimate the variability
of an interaction effect, if it were present. If an interaction between target locations and BA set-ups actually exists, the estimate σ̂ of σ provided in this study would account for both interaction and error variability in a joint way and it would not be possible to separate the two components. Therefore, from a practical point of view, the more the hypothesis of no interaction is violated, the more σ̂ overestimates σ.
After fitting the model, an assessment of the assumptions on the errors has been performed on the realised residuals, i.e. the deviations of the experimental results from the results predicted by the fitted model for the corresponding cells (êij = dij − d̂ij). The realised residuals plotted against the positions of the targets do not appear consistent with the hypothesis of constant variance of the errors. In fact, as shown in Figure 7(a), the variability of the realised residuals standardised by σ̂, namely êij = (dij − d̂ij)/σ̂, seems different in different target locations.
For this reason, an alternative model of the data has been considered which accounts for the variance structure of the errors. This alternative model is defined as the initial model (see Equation 1), except that the variance of the errors is modelled as different in different target locations, namely:

σi = σnew δi,  δ1 = 1.    (3)
From Equation (3) it follows that σnew is the unknown parameter describing the error standard deviation in target position 1, whereas the δi's (i = 2, …, 6) are the ratios between the error standard deviation in the ith target position and that in the first.
The alternative model has been fitted using one of the variance function classes provided in the package nlme and the function lme(), so that σnew and the δi's are also optimised jointly with the other model parameters (μ, τi and σb) by the application of the REML method (Section 5.2 in Pinheiro and Bates 2000).
For the alternative model, diagnostic analyses of the realised residuals did not contradict its underlying assumptions. The standardised realisations of the residuals, i.e. êij = (dij − d̂ij)/σ̂i, when plotted against the target locations (Figure 7(b)), no longer appear to exhibit different variances in different target locations, as was the case for the initial model (Figure 7(a)). The same standardised realisations were also found not to exhibit any significant departure from normality.
The fact that all the target fields have more than 50% of the targets in common, together with the fact that each field has been obtained by recursively adding a single target to the current field, may cause the experimenters to expect that the measurement results obtained when different target fields have been used in the BA procedure have some degree of correlation. If that were the case, then the experimental results should contradict the assumed independence of the random effects $b_j$'s. The random effects, like the errors, are unobservable random variables. Yet, algorithms have been developed to predict the realisations of these unobservable random effects on the basis of the experimental results and their assumed model (Equations 1, 2 and 3 with the pertinent description above). The predictor used in this investigation is referred to as the best linear unbiased predictor (BLUP). It has been implemented in nlme and it is described, for instance, in Pinheiro and Bates (2000). The predicted random effects $\hat{b}_j$'s for the model and the measurement results under investigation are displayed in Figure 8(a). To highlight a potential correlation between predicted random effects relative to target fields that differ by only one target, the $\hat{b}_{j+1}$'s have been plotted against the $\hat{b}_j$'s in Figure 8(b) ($j = 1, \ldots, 5$).
From a graphical examination of the diagrams of Figure 8 it can be concluded that, in contrast with what the procedure for establishing the target fields may lead the experimenter to expect, the measurement results do not appear to support a violation of the hypothesis of independence of the random effects. Similar values for the BLUPs, and therefore similar conclusions, can be drawn also for the initial mixed-effects model (the BLUPs for the initial model have not been reported for brevity).
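The graphical check of Figure 8(b) amounts to examining the lag-1 association between consecutive predicted random effects. A minimal numerical counterpart is sketched below; the BLUP values are invented placeholders, since the paper obtains the actual $\hat{b}_j$'s from nlme in R.

```python
# Illustrative companion to Figure 8(b): sample Pearson correlation between
# predicted random effects of consecutive target fields, b_{j+1} versus b_j.
# The six BLUP values below are hypothetical, not the paper's estimates.

def lag1_correlation(values):
    """Pearson correlation between values[1:] and values[:-1]."""
    x, y = values[:-1], values[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

blups = [0.12, -0.05, 0.08, -0.11, 0.03, -0.02]   # hypothetical bhat_j values (mm)
print(abs(lag1_correlation(blups)) < 1.0)          # → True
```

A lag-1 correlation near zero is consistent with the independence assumption on the $b_j$'s; a value near $\pm 1$ would contradict it.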
As suggested in Pinheiro and Bates (2000) (Section 5.2, in particular), to support the selection between the initial and the alternative model, a likelihood ratio test (LRT) has been run using the generic function anova() implemented in R. A p-value of 0.84% led to the rejection of the simpler initial model (8 parameters to be estimated) when compared with the more complex alternative model (8 + 5 parameters to be estimated). The same conclusion would hold if the selection decision is made on the basis of the Akaike information criterion (AIC), also provided in the output of anova() (read more about AIC in Chapters 1 and 2 of Pinheiro and Bates 2000).

This model selection bears significant practical implications. From a practitioner's point of view, in fact, selection of the alternative model means that the random errors have significantly different variances when measuring targets in different locations of the workspace. The workspace is not homogeneous: there are regions where the variability of the random errors is significantly lower than in others. This also means that a measurement task can therefore be potentially designed so that this measuring system can perform it satisfactorily in some regions of its workspace but not in others.
Figure 8. Predicted random effects and scatter plot of random effects associated with consecutive target fields.
International Journal of Computer Integrated Manufacturing 495
Estimates $\hat{\tau}_i$ confirm the tautological significance of the location of the targets, or block factor, whereas $\hat{\mu}$, depending on the parametrisation of the model, can for instance be the centre of mass of the point locations or can also be associated with a particular target location (cf. Chapters 13 and 14 in Faraway 2005). None of these estimates conveys any practical information; they are therefore not reported.
The significance of the random effect associated with the BA set-up procedure has been tested using a likelihood ratio approach, where the alternative model has been compared with a null model characterised by an identical variance structure of the errors but without any random effect (i.e. $\sigma_b = 0$). The p-value was less than $10^{-32}$ under the assumption of a chi-squared distributed likelihood ratio. In reality, as explained in Section 8.2 of Faraway (2006), such an approach is quite conservative, i.e. it tends not to reject the null hypothesis by overestimating the p-value. However, given the extremely low p-value ($< 10^{-32}$), there is strong evidence supporting the rejection of the null hypothesis of an insignificant random effect ($H_0$: $\sigma_b = 0$).
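The mechanics of this likelihood ratio test can be sketched numerically. The log-likelihood values below are invented placeholders (the paper obtains them from lme() fits in R); for a single tested variance parameter the statistic is referred, conservatively, to a chi-squared distribution with 1 degree of freedom.

```python
import math

# Hedged sketch of the LRT for H0: sigma_b = 0.
# LRT statistic: 2 * (loglik_alternative - loglik_null), compared with chi2(1).
# Log-likelihood inputs are hypothetical, not the paper's fitted values.

def lrt_pvalue_df1(loglik_null: float, loglik_alt: float) -> float:
    """p-value of the LRT statistic against a chi-squared with 1 df."""
    stat = 2.0 * (loglik_alt - loglik_null)
    # Survival function of chi2(1): P(X > stat) = erfc(sqrt(stat / 2))
    return math.erfc(math.sqrt(stat / 2.0))

# With hypothetical log-likelihoods, a large improvement in fit under the
# alternative model yields an extremely small p-value:
p = lrt_pvalue_df1(loglik_null=-120.0, loglik_alt=-45.0)
print(p < 1e-32)  # → True
```

As the paper notes (via Faraway 2006), this chi-squared reference tends to overestimate the p-value when testing a variance component on the boundary, so an extremely small p-value remains strong evidence against $H_0$.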
From a practical point of view, this indicates that caution should be exercised when selecting the target locations for running the BA algorithm during the set-up phase: when repeating the BA procedure during the set-up with identical positions of the transmitters, the consideration of a different number of targets significantly inflates the variability of the final measurement results.
Substituting the estimates of Equations (4) and (5) in the adaptation of Equation (2) to the alternative model, it is derived after a few passages that the choice of a different number of targets when running the BA algorithm during the set-up phase accounts for 99.22, 99.94, 99.42, 99.89, 99.99 and 99.77% of the variance of the measured origin–target distance when the target is in locations 1, 2, 3, 4, 5 and 6, respectively. If there were no discretion left to the operator when selecting the number of targets and their locations during the BA procedure, then the overall variability of the final results in each of the locations tested could have been reduced by the large percentages reported above.
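These percentages follow from the usual variance decomposition of a mixed-effects model: the share of total variance attributable to the set-up random effect is $\sigma_b^2/(\sigma_b^2 + \sigma_i^2)$ for each target location $i$. The sketch below illustrates the computation; the standard deviations used are hypothetical, not the paper's REML estimates.

```python
# Illustrative computation (not the paper's R code) of the proportion of the
# variance of the measured origin-target distance attributable to the BA
# set-up random effect: sigma_b^2 / (sigma_b^2 + sigma_i^2).
# The numerical values below are hypothetical placeholders.

def setup_variance_share(sigma_b: float, sigma_i: float) -> float:
    """Fraction of total response variance due to the set-up random effect."""
    var_b = sigma_b ** 2
    return var_b / (var_b + sigma_i ** 2)

# Example with hypothetical standard deviations (mm):
share = setup_variance_share(sigma_b=0.160, sigma_i=0.014)
print(f"{100 * share:.2f}% of the variance is due to the set-up effect")
```

When $\sigma_b$ dominates $\sigma_i$, as in the results above, this ratio approaches 1, which is why the reported percentages all exceed 99%.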
It may be worth pointing out that the designed experiment considered in this investigation could be replicated K times, on the same or on different days. The obtained measuring results could then be modelled with the following equation:

$d_{ijk} = \mu + \tau_i + b_j + c_k + e_{ijk} \qquad (6)$

with $c_k \sim N(0, \sigma_c^2)$, $k = 1, 2, \ldots, K$, being the random effect associated with the $k$th repetition of the experiment. The significance of the random effects $c_k$'s could then be tested in a similar way as the significance of the $b_j$'s has been tested above. The practical use of the model of Equation (6) is twofold. First, it enables the experimenter to detect whether a significant source of variability can be associated with the replication of the whole experiment. For instance, if each replication takes place in slightly different natural and/or artificial light conditions, then testing the significance of the $c_k$'s would tell whether these environmental conditions had significant effects on the measurement results ($d_{ijk}$). The estimate $\hat{\sigma}_c$ would quantify the increased variability of the response variable attributable to them. Second, the increased number of measurements taken would raise the confidence of the experimenter in the estimates of $\hat{\sigma}_b$, $\hat{\sigma}_c$ and $\hat{\sigma}$. For instance, it would dissipate (or confirm) the suspicion that the experimenter may have that the random effects attributed in Equation (1) to the different set-ups, namely the $b_j$'s, may be contributed to by the natural variability due to repetition which was estimated in Equations (4) and (5). This further study can be considered as future work.
5.2 Transient definition and analysis
In the above analysis, the average of all the 970 experimental data in a cell has been considered. The variability of each of these 970 determinations of distance, say $\sigma_t$, is significantly larger than that of their average ($\sigma_d$). If these determinations were mutually independent, then it would be $\sigma_d = \sigma_t/\sqrt{970}$. But the determinations are instead highly correlated, owing to the fact that they are taken at varying sampling intervals of the order of milliseconds. Identifying the correlation structure of these determinations is beyond the scope of this investigation. In this study, when the instrument is measuring the $t$th determination, say $d_{t,ij}$, a running average of all the determinations measured until that instant, say $\bar{d}_{t,ij}$, is considered. An interesting question that arises is: 'How many determinations are sufficient for the instrument to provide a measurement $\bar{d}_{t,ij}$ that does not differ much from the measurement result $d_{ij}$?' A $\pm 1$ mm maximum deviation from $d_{ij}$ has been considered for differentiating the steady and the transient states of $\bar{d}_{t,ij}$. The value $t^\star$ has been used to identify the end of the transient. In other words, for any index $t > t^\star$ it holds that $|\bar{d}_{t,ij} - d_{ij}| < 1$ mm.
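This definition of $t^\star$ can be sketched directly. The function below is an illustrative Python counterpart of the ad hoc R function mentioned later in the text, and the synthetic series is invented: it returns the last index at which the running average still deviates from the final average by at least the tolerance.

```python
# Sketch of the transient-end definition: t* is the last index at which the
# running average of the determinations deviates from the overall average
# (the measurement result) by at least `tol` (1 mm in the paper).
# Not the authors' R implementation; the example series is synthetic.

def transient_end(determinations, tol=1.0):
    """Return t* (1-based); 0 means the running average never leaves the band."""
    final = sum(determinations) / len(determinations)
    t_star = 0
    running = 0.0
    for t, d in enumerate(determinations, start=1):
        running += d
        if abs(running / t - final) >= tol:
            t_star = t
    return t_star

# Synthetic example: a series that starts 5 mm low and settles at 100 mm.
series = [95.0] * 3 + [100.0] * 97
print(transient_end(series))  # → 13
```

Note that, as in the paper, the reference value is the average of all the determinations, so the transient itself contributes to the measurement result it is compared against.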
In Figure 9, for each of the 36 experimental conditions, two continuous horizontal lines 1 mm apart from the measurement result $d_{ij}$ delimit the steady-state region, whereas a single vertical dashed line indicates the transition index $t^\star$ from the transient to the steady state as defined above.

From Figure 9, it is observed that for the same target location (panels in the same column) the transition from transient to steady state may occur at different $t^\star$'s for different BA set-ups (different vertical dashed lines in each panel). This suspicion is even stronger when considering $t^\star$ for the same BA set-up but for different target locations (panels on a row in Figure 9).
To ascertain whether the variation of $t^\star$ with the BA set-ups and with the target locations examined is significant, or is only the result of uncontrolled or uncontrollable random causes, the experimental values of $t^\star$ calculated starting from the RCBD already discussed have been analysed with a fixed-effects ANOVA model (cf. Section 16.1 in Faraway 2005). The values of $t^\star$ have been computed by an ad hoc function implemented in R by one of the authors. The $t^\star$'s are assumed as though they have been generated by the following equation:

$t^\star_{ij} = \mu + \beta_i + \gamma_j + \epsilon_{ij}, \qquad (7)$

where the $\beta_i$'s and the $\gamma_j$'s are the effects of the blocking factor (the target locations) and of the BA set-ups, respectively, whereas the $\epsilon_{ij}$'s are the random errors, assumed independent, normally distributed with constant variance and zero mean. The parameters have been estimated using the ordinary least squares method as implemented in the function lm() in R (cf. R Development Core Team 2009). The assumptions underlying the models have been checked on the realised residuals and nothing amiss was found. To test the potential presence of interaction between the two factors in the form of the product of their two effects, a Tukey test for additivity was also performed (cf. Section 27.4 in Neter et al. 1996). This test returned a p-value of 30.43%. It is therefore concluded that the experimental data do not support the rejection of the hypothesis of an additive model in favour of this particular type of interaction effect of target locations and BA set-ups on $t^\star_{ij}$.
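For a balanced two-way layout without replication, the OLS fit of the additive model of Equation (7) reduces to grand mean plus row and column effects. The sketch below illustrates this; the data are synthetic and exactly additive, and the paper performs the actual fit with lm() in R.

```python
# Sketch of fitting the additive model of Equation (7) by ordinary least
# squares on a balanced two-way table without replication. The table of
# t* values below is synthetic and exactly additive, for illustration.

def additive_fit(table):
    """table[i][j] holds t*_ij; returns (grand mean, row effects, col effects)."""
    n_rows, n_cols = len(table), len(table[0])
    grand = sum(sum(row) for row in table) / (n_rows * n_cols)
    row_eff = [sum(row) / n_cols - grand for row in table]
    col_eff = [sum(table[i][j] for i in range(n_rows)) / n_rows - grand
               for j in range(n_cols)]
    return grand, row_eff, col_eff

# Exactly additive synthetic data: fitted values reproduce the table.
t_star = [[10 + r + c for c in (0, 2, 4)] for r in (0, 5)]
grand, rows, cols = additive_fit(t_star)
resid = [t_star[i][j] - (grand + rows[i] + cols[j])
         for i in range(2) for j in range(3)]
print(max(abs(e) for e in resid))  # → 0.0 for additive data
```

With real $t^\star$ data the residuals would not vanish; Tukey's additivity test then asks whether a multiplicative row-by-column term explains a significant part of them.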
The effect of the target positions on $t^\star$ was significant, i.e. $H_0$: $\beta_i = 0$ ($i = 1, 2, \ldots, 6$) gives rise to a p-value of 3.88% (under the hypotheses of the model). However, the effect of the BA set-ups did not appear to be significant, i.e. $H_0$: $\gamma_j = 0$ ($j = 1, 2, \ldots, 6$) gives rise to a p-value of 84.96% (under the hypotheses of the model).
From a practical point of view, there are two main implications of these findings. First, the selection of a different number of targets when running BA algorithms during the set-up phase does not appear to have significant consequences on the duration of the transient for obtaining a measurement. Second, the duration of the transient appears to be significantly different for different target locations within the workspace. This may be stated as follows: there are regions of the workspace that require longer transient periods than others before a measurement result stabilises, and this is expected to have consequences for the accuracy and precision of the determination of the position of moving objects (tracking).
In fact, if a target point is in motion at a speed sufficient for a number of determinations greater than $t^\star$ to be recorded in each measured point of its trajectory, then all the measurement results will be representative of a steady state. But this may hold for only some portions of the target trajectory; for others, characterised by a larger $t^\star$, such a condition may not be satisfied, with a consequent inflation of the variability of those estimated positions, which may also be biased.
6 Conclusions
The main characteristics of the Metris Indoor GPS system have been reviewed on the basis of information in the public domain. In particular, the working principles of the system have been presented in terms of a cone-based mathematical model.
The overall description of the system has been instrumental in highlighting the key role of bundle adjustment procedures during the set-up of the system. The selection of the number and location of target points that are used when running the bundle adjustment procedure during the set-up phase can be affected by discretionary judgements exerted by the operators.

To investigate the statistical significance of the effects of this selection, a randomised complete block design has been run on the distance between the origin of the reference system and the measured positions of target locations different from those used during the bundle adjustment in the set-up phase. This design enhances the possibility for the potential effects of different set-ups on the origin–target distances to be detected by discriminating them from the obvious effects of the target positions. The set-ups considered were different only in the number of the targets used when executing the bundle adjustment procedure.
A mixed-effects and a fixed-effects linear statistical model were fitted to the measurement results using the restricted maximum likelihood method and the ordinary least squares technique, respectively.

The measurement results, defined as the sample average of the 970 determinations of distance recorded in each target location for each set-up, have been analysed with the mixed-effects model. By analysing the realisations of the residuals, statistically different standard deviations of the random errors were identified for different target positions. The work envelope of the instrument does not therefore appear homogeneous: in some areas the variability of the random error is greater than in others when performing measurements of the distance of a target from the origin. Owing to this heterogeneity, the point estimates of the standard uncertainty of the measured distances ($\sigma_d$) were different for different target positions and lay in a range between 160.8 and 161.4 μm. The different set-ups, tested to be statistically significant, always accounted for more than 99.2% of the estimated standard uncertainty (the percentage varies for different target positions). This quantitative evidence suggests that the selection of points when running the bundle adjustment algorithms in the set-up phase should not be overlooked. Performing this selection in a consistent way, according to some rule that ideally leads to choosing the same points when the transmitters are in the same positions, may be a course of action worth considering. Also, for replication and comparison purposes, it may be advisable to quote the locations of the targets used in setting up the system when reporting the results of a measurement task.
The duration of the transient, i.e. the number of determinations of distance needed for their current average to be within ±1 mm of the measurement result (the average of the 970 determinations), has been analysed with the fixed-effects model. The different set-up configurations considered did not have any significant effect on the duration of the transient. However, this duration was significantly different in different target locations. It can therefore be concluded that the working space of the instrument is heterogeneous also for the characteristics of the transient of measurement. It is expected that this conclusion has negative implications on the precision and unbiasedness of the measurements obtained when using the instrument for tracking moving points, or moving objects that the target points (or vector bars) are attached to. Given a pre-specified configuration of iGPS transmitters, without any zone partitions having been pre-established among them, if an object is moving within an area of the working space of such an iGPS, say area A, its position may be tracked correctly, because the transient is sufficiently short there. But if the same movement of the same object is tracked by the same iGPS in another area of the same iGPS working area, say area B, the system may not be able to track its location correctly, because the transient may not yet have finished.
Acknowledgements
This study is part of the research initiatives of The Bath Innovative Design and Manufacturing Research Centre (IdMRC), which is based in the Department of Mechanical Engineering of the University of Bath and which is supported by the United Kingdom Engineering and Physical Sciences Research Council (EPSRC). In particular, this investigation was carried out within the scope of the IdMRC research theme 'Metrology, Assembly Systems and Technologies' (MAST), which is coordinated by Professor Paul Maropoulos.
References
BS ISO 5725-1, 1994. Accuracy (trueness and precision) of measurement methods and results – Part 1: General principles and definitions. London: British Standards Institution. Available from: http://shop.bsigroup.com/ [Accessed 14 March 2010].
Craig, J.J., 1986. Introduction to robotics: mechanics & control. Reading, MA, USA: Addison-Wesley.

Estler, W.T., et al., 2002. Large-scale metrology – an update. Annals of the CIRP, 51 (2), 587–609.

Faraway, J., 2005. Linear models with R. Boca Raton, FL, USA: Chapman & Hall/CRC.

Faraway, J., 2006. Extending the linear model with R: generalized linear, mixed effects and nonparametric regression models. Boca Raton, FL, USA: Chapman & Hall/CRC.

Hartley, R.I. and Sturm, P., 1997. Triangulation. Computer Vision and Image Understanding, 68 (2), 146–157.

Hedges, T.M., et al., 2003. Position measurement system and method using cone math calibration. United States Patent US6535282B2, March.
JCGM (Joint Committee for Guides in Metrology), 2008. JCGM 100:2008 (GUM:1995 with minor corrections). Evaluation of measurement data – guide to the expression of uncertainty in measurement. BIPM – Bureau International des Poids et Mesures. Available from: http://www.bipm.org [Accessed 7 June 2009].
Liu, H., et al., 2007. Survey of wireless indoor positioning techniques and systems. IEEE Transactions on Systems, Man, and Cybernetics – Part C: Applications and Reviews, 37 (6), 1067–1080.
Liu, Z., Liu, Z., and Lu, B., 2008. Error compensation of indoor GPS measurement. In: C. Xiong et al., eds. Proceedings of the first international conference on intelligent robotics and applications (ICIRA2008), Part II, 15–23 October 2008, Wuhan, People's Republic of China. Lecture Notes in Computer Science 5315. Berlin: Springer-Verlag, 612–619.

Lourakis, M.I.A. and Argyros, A.A., 2009. SBA: a software package for generic sparse bundle adjustment. ACM Transactions on Mathematical Software, 36 (1), 1–30.

Maisano, D., et al., 2008. Indoor GPS: system functionality and initial performance evaluation. International Journal of Manufacturing Research, 3 (3), 335–349.
Neter, J., et al., 1996. Applied linear statistical models. 4th ed. New York: Irwin.

Pinheiro, J.C. and Bates, D.M., 2000. Mixed-effects models in S and S-PLUS. New York: Springer-Verlag.
Piontek, H., Seyffer, M., and Kaiser, J., 2007. Improving the accuracy of ultrasound-based localisation systems. Personal and Ubiquitous Computing, 11 (6), 439–449.

Priyantha, N.B., Chakraborty, A., and Balakrishnan, H., 2000. The Cricket location-support system. In: Proceedings of the 6th annual international conference on mobile computing and networking (MobiCom'00), 6–11 August, Boston, MA. New York: ACM, 32–43.

R Development Core Team, 2009. R: a language and environment for statistical computing. Vienna: R Foundation for Statistical Computing. Available from: http://www.R-project.org ISBN 3-900051-07-0.
Sae-Hau, C., 2003. Multi-vehicle rover testbed using a new indoor positioning sensor. Thesis (Master's). Massachusetts Institute of Technology.

Savvides, A., Han, C.C., and Strivastava, M.B., 2001. Dynamic fine-grained localization in ad-hoc networks of sensors. In: Proceedings of the 7th annual international conference on mobile computing and networking (MobiCom'01), 16–21 July, Rome. New York: ACM, 166–179.

Triggs, B., et al., 2000. Bundle adjustment – a modern synthesis. In: B. Triggs, A. Zisserman, and R. Szeliski, eds. Proceedings of the international workshop on vision algorithms (in conjunction with ICCV'99), 21–22 September 1999, Kerkyra, Corfu, Greece. Lecture Notes in Computer Science 1883. Berlin: Springer-Verlag, 298–372.

Welch, G., et al., 2001. High-performance wide-area optical tracking – the HiBall tracking system. Presence: Teleoperators and Virtual Environments, 10 (1), 1–21.

Wolf, P. and Ghilani, C., 1997. Adjustment computations: statistics and least squares in surveying and GIS. New York: Wiley.
A real-time simulation grid for collaborative virtual assembly of complex products
X.-J Zhen, D.-L Wu*, Y Hu and X.-M Fan
CIM Institute, Shanghai Jiao Tong University, Shanghai, China

(Received 9 April 2009; final version received 9 February 2010)

Simulation of collaborative virtual assembly (CVA) processes is a helpful tool for product development. However, existing collaborative virtual assembly environments (CVAE) have many disadvantages with regard to computing capability, data security, stability, and scalability, and moreover it is difficult to create enterprise applications in these environments. To support large-scale CVAEs offering high fidelity and satisfactory interactive performance among various distributed clients, highly effective system architectures are needed. In this paper, a collaborative virtual assembly scheme based on grid technology is proposed. This scheme consists of two parts: one is a grid-based virtual assembly server (GVAS) which can support parallel computing, the other a set of light clients which can support real-time interaction. The complex and demanding computations required for simulation of virtual assembly (VA) operations, such as model rendering, image processing (fusion), and collision detection, are handled by the GVAS using network resources. Users at the light clients input operation commands that are transferred to the GVAS and receive the results of these operations (images or video streams) from the GVAS. Product data are managed independently by the GVAS using the concept of RBAC (role-based access control), which is secure enough for this application. The key related technologies are discussed, and a prototype system is developed based on the web services and VA components identified in the paper. A case study involving a car-assembly workstation simulation has been used to verify the scheme.
Keywords: grid; collaborative virtual assembly; complex product; real-time collaborative simulation
Notation
CDM collision detection model
CVA collaborative virtual assembly
CVAE collaborative virtual assembly environment

1 Introduction
CVA technology is used to develop complex products such as automobiles and ships. It provides an effective experimental assembly environment for designers working at different locations (Lu et al. 2006), who can exchange product data and discuss and verify the assembly scheme to improve the previous design scenario. Many CVA systems or prototypes have been built for product development (Bidarra et al. 2002, Shyamsundar and Gadh 2002, Chen et al. 2004, Chryssolouris et al. 2009). However, existing CVA systems still have many disadvantages. In general, virtual environments (VEs) have no modelling function, which means that products must generally be modelled in a CAD environment, creating product models that cannot be imported into the VE directly; as a result, much preparatory work such as model transformation must be done before the VA task can be performed. Moreover, in the context of expanding requirements for assembly simulation of complex products, current CVA systems lack adequate computing power. Most CVA systems support only single-PC, not parallel, computing, which is seriously insufficient to meet requirements. For example, the frame rate for rendering a model of a whole car is about 2–6 F/S (frames/second), which is not compatible with interactive operation. In addition, the computing resources of all user nodes are allocated statically before the task is begun, which limits the stability and extensibility of the system.
Grid technology provides a new way to promote collaborative virtual assembly using the concept of sharing distinct resources and services. This approach has been applied successfully in areas of computer science requiring massive computing, such as parallel computing and massive data processing, but less so in the areas of design and manufacturing, especially when
*Corresponding author Email: wudianliangvr@gmail.com
Vol 23, No 6, June 2010, 500–514
ISSN 0951-192X print/ISSN 1362-3052 online
Ó 2010 Taylor & Francis
DOI: 10.1080/09511921003690054
real-time computing is required. However, the characteristics of grid technology are a perfect fit for the requirements of a CVA system, and CVA based on grid technology offers many advantages with regard to computing power, data security, stability, and scalability, as well as ease of constructing enterprise applications.
In this paper, a collaborative virtual assembly scheme based on grid technology is proposed, and a prototype system called VAGrid (Collaborative Virtual Assembly-based Grid) is developed based on web service and VA components. This system consists of two parts: a grid-based virtual assembly server (GVAS) and a set of light clients. Computing tasks are handled by the GVAS using resources available over the internet, with users performing only simple interactive operations using a graphical interface. The key related technologies are discussed in detail.

The rest of this paper is organised as follows. In Section 2, related research on CVA and grid technology is reviewed. Section 3 describes the structure and workflow of the system. The representative features and capabilities of VAGrid are described in Section 4. Section 5 provides a case study, and Section 6 states conclusions and directions for future work.
2 Related research
2.1 CVA
Many researchers have already conducted extensive research into CVA, and significant results have been achieved. An internet-based collaborative product assembly design tool has already been developed (Shyamsundar and Gadh 2002). In this system, a new assembly representation scheme was introduced to improve assembly-modelling efficiency. Liang has also presented a collaborative 3D modelling system using the web (Liang 2007). Lu et al. developed a collaborative assembly design environment which enabled multiple geographically dispersed designers to design and assemble parts collaboratively and synchronously using the internet (Lu et al. 2006). Web-based virtual technologies have also been applied to the automotive development process (Noh et al. 2005, Dai et al. 2006, Pappas et al. 2006).

These researchers have proposed various approaches to enable collaboration among multiple designers, but the only interactive modes supported by the CVA environment, such as chat channels, were not found to be effective or intuitive. Moreover, the performance of these systems, especially when supporting real-time assembly activity for complex products, was considered inadequate. In an effort to solve these problems, the relative performance of various distribution strategies which support collaborative virtual reality environments, such as client/server mode, peer-to-peer mode, and several hybrid modes, has been discussed (Marsh et al. 2006). These researchers proposed a hybrid architecture which successfully supported real-time collaboration for assembly. For supporting the interactive visualisation of complex dynamic virtual environments for industrial assemblies, a dynamic data model has been presented, which integrates a spatial data set of hierarchical model representations and a dynamic transformation mechanism for runtime adaptation (Wang and Li 2006). Based on this model, complexity reduction was accomplished through view frustum culling, non-conservative occlusion culling, and geometry simplification.

2.2 Grid computing

Grid computing was first proposed by Ian Foster in the 1990s (Foster and Kesselman 1999). It aims to share all the resources available on the internet to form a large, high-performance computing network. An important characteristic of a grid-computing environment is that a user may connect to the grid-computing system through the internet, and the grid-computing system can provide all kinds of services for the user. Some grid toolkits, such as Globus (Foster 2006), Legion (Grimshaw and Natrajan 2005), and Simgrid (Emad et al. 2006), which provide basic capabilities and interfaces for communication, resource location, scheduling, and security, primarily use a client-server architecture based on centralised coordination. These grid-computing applications use client-server architectures, in which a central scheduler generally manages the distribution of processing to network resources and aggregates the processing results. These applications assume a tightly coupled network topology, ignoring the changing topology of the network, and are suitable for large enterprises and long-time collaboration.

However, in the area of production design, especially for real-time design simulation, few relevant studies have been reported. Li et al. (2007a, 2007b) presented the concept of a collaborative design grid (CDG) for product design and simulation, and a corresponding architecture was set up based on the Globus toolkit, version 3.0. Fan et al. (2008) presented a distributed collaborative design framework using a hybrid of grid and peer-to-peer technologies. In this framework, a meta-scheduler is designed to access computational resources for design, analysis, and process simulation, which can help in resource discovery and optimal use of resources. To meet industrial demands for dynamic sharing of various resources, Liu and Shi (2008) proposed the concept of grid manufacturing. According to the characteristics of
the resources to be shared and the technologies to be used, grid manufacturing distinguishes itself from web-based manufacturing by providing transient services and achieving true interoperability. To support real-time design simulation, a hybrid of HLA (high-level architecture) and grid middleware was used (Rycerz et al. 2007). These systems can solve some problems from a special viewpoint, but to support collaborative simulation design for complex products, further research is needed.
3 Structure and workflow of VAGrid
3.1 Functions and performance requirements
Complex products such as automobiles and ships have similar features: (1) numerous components, (2) complex structure, (3) high research and development costs, and (4) requirement for a large number of designers. Some form of collaborative design has been used by most companies making complex products. However, to date only some indirect applications, such as physically co-located meetings and CAD-based conferencing, have been attempted. Unfortunately, these approaches have many deficiencies with regard to service efficiency, application effectiveness, and convenience (Trappey and Hsiao 2008). In contrast, this research targets the entire distributed team-design scenario, involving all the participating designers and supervisors. A direct way is needed to enable geographically distributed designers to assemble their individual designs together in real time. To make this possible, the following set of system requirements should be incorporated into the development process:

- Ease of use. System configuration, including allocation of computing resources, is done automatically, and the user can obtain the desired results by means of simple interactions.
- Convenient data conversion. Product data requirements in the virtual environment are different from those in the CAD environment, so data conversion is required. This process should be simple or automatic and should support common CAD software such as UG, CATIA, and PRO/E.
- Good real-time scene rendering performance. The virtual scene at each user station should be responsive enough to meet the needs of interactive operation, which requires powerful computing resources to perform model rendering, collision detection, and similar tasks.
- Strong data security. These systems have many users, including product designers, assembly technologists, and even component suppliers. User authorisation or similar measures must be taken to maintain data security.
- Multi-user scene synchronisation. Consistency of the scene across all the system nodes must be maintained. This means that when the scene at a particular node changes because of user manipulation, the information must transfer to all other nodes and update their scenes synchronously.
- Multi-modal interaction. Users interact with the VE through multiple modalities for different hardware, such as the data glove and FOB (flock of birds), as well as the common keyboard and mouse as in CAD.
3.2 Structure of a grid-based virtual assembly simulation for complex products
The basic idea of collaborative assembly is shown in Figure 1. Geometric modelling and assembly design of products are carried out in a CAD system by designers at different locations. Then geometric and assembly information are transferred to the collaborative assembly environment by means of a special data interface. All designers can share the same virtual assembly verification environment collaboratively to perform assembly analyses and assembly process planning for products, as well as assembly operation training. The system can run over the internet or on a LAN (Local Area Network), or particularly on a company intranet for reasons of performance and stability.
In the context of the functional requirements and the basic concept described above, several structures can be used. A distributed parallel architecture (Zhen et al. 2009) based on HLA and MPI (Message Passing Interface), with many client nodes and one master node, has been used, in which each client node was supported by PCs in a LAN. The advantage of this approach was fast execution speed, especially at initialisation, because data were saved at each local position. However, this approach also had many disadvantages: (1) data protection is difficult to achieve for distributed data; (2) the supporting resource nodes must be configured manually, and all the resource nodes are static and unreliable; (3) the system can support only one task at a time and is therefore not suitable for general use. For these reasons, an implementation scheme based on grid technology has been proposed,
as shown in Figure 2. The computing resources needed during the assembly process for tasks such as rendering, collision detection, and image processing (these may be any idle resource available on the internet or a corporate intranet) are managed dynamically by the grid. Multi-tasking can be supported, but there is only one virtual assembly scene for each task, maintained and updated by the grid. Product data and evaluation results are stored at an independent location. The configuration of each user is simple, involving only an I/O device, or a multi-channel stereo system if using an immersive virtual environment. The scene at each user client station is a sequence of continuous images or a video stream that a user can ‘see’ at his location. Users are classified as either an assembly task manager or a participant.

Figure 2. CVA solution based on grid.
International Journal of Computer Integrated Manufacturing 503

3.3 Architecture and workflow of VAGrid
The system architecture of VAGrid is illustrated in Figure 3. All users first register with the system using the grid portal. Once a user has finished designing assemblies or subassemblies using a CAD system, the product models will be submitted to a given location in the grid database. The assembly-task manager accesses
the database with a valid authorisation, prepares the corresponding data in the VE, and then sets up and launches a new task. The system will search automatically for resources to support this task. If insufficient resources are available, a rejection message will be generated; otherwise, a collaborative assembly environment will be established. The related users can then join the task and enter the CVE to perform assembly verification together.
The system consists of five key parts:
(1) Grid platform management: the basis of the system. This part provides management of communication status and computing resources for the CVE. Management of users, tasks, and resources is also performed by this part.
(2) Product management: convenient data transformation is an essential requirement for complex products, and management of these data in a distributed grid environment is a complex task.
(3) Remote real-time collaboration: collaboration based on virtual user models in a virtual environment and remote real-time interactive assembly operations are handled by this module.
(4) Virtual scene graphics management: this module dynamically maintains the virtual scene displays, including assembly based on solution of constraints and constraint navigation. The design and implementation of this module have already been described in the paper on IVAE (Yang et al. 2007).
(5) Tools for evaluation and analysis: a set of tools for distance and dynamic gap measurement, assembly path and sequence tracking, dynamic collision detection, and assembly process evaluation report generation, which can be used interactively by each user.
4 Representative features and capabilities

4.1 Grid management
The grid platform is the main framework of the system, which handles the management of users (registration, logon, and user status monitoring), tasks (startup, maintenance), and resources (monitoring, dynamic configuration).
- User management: users can be classified into two types, the task manager and ordinary users. The former is in charge of the whole task and holds the highest authority; the latter is an operator in the virtual environment. Although there is only one virtual environment, with no scene-synchronisation problems among client nodes, the operating authority of each user is different, and therefore problems with operating collisions do arise and must be solved. In this case, the manager defines roles related to the task and sets levels of authority according to these roles. When a user joins a task, he enters into a role and operates with its corresponding authority.
- Task management: unlike existing systems, the VAGrid system can support multiple tasks running in parallel. Each task has its own group of users, simulation data, and supporting resources, all managed uniformly by the task scheduling module. The workflow is shown in Figure 4. A task manager sets up a new task and submits it to the task queue. The resource scheduling module queries the resources and starts the task by calling CVA services, which are the basic grid services deployed among the grid nodes, such as virtual scene graphics management, model rendering, and collision detection.
- Resource management:
This is the most complex part of the system. Large amounts of real-time data must be processed, and large amounts of many kinds of computing resources, for example for rendering, are required. These resources are distributed among the computing nodes on the internet. A resource cannot be used before a plug-in is installed; this is a small program that encapsulates all the resources with the same kind of access interface, as shown in Figure 5. Each node registers with the registration centre with an attributes message. When a node fails, its work will be delivered to new resource nodes dynamically.
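The task-queue and resource workflow described in this subsection can be sketched as a minimal scheduler with node registration and failover. All names below (`GridScheduler`, `nodes_needed`, the attribute strings) are invented for illustration; the paper does not specify this interface.

```python
from collections import deque

class GridScheduler:
    """Illustrative sketch of task scheduling, resource query, and
    node-failure handling as described in the text (names invented)."""
    def __init__(self):
        self.task_queue = deque()
        self.nodes = {}           # node_id -> {"attrs": ..., "task": ...}

    def register_node(self, node_id, attrs):
        """A node registers with the registration centre (attributes message)."""
        self.nodes[node_id] = {"attrs": attrs, "task": None}

    def submit(self, task):
        self.task_queue.append(task)

    def schedule_next(self):
        """Query resources; reject the task if too few idle nodes exist,
        otherwise start it on the reserved nodes (the CVA services)."""
        task = self.task_queue.popleft()
        idle = [nid for nid, n in self.nodes.items() if n["task"] is None]
        if len(idle) < task["nodes_needed"]:
            return {"task": task["name"], "status": "rejected"}
        for nid in idle[:task["nodes_needed"]]:
            self.nodes[nid]["task"] = task["name"]
        return {"task": task["name"], "status": "running"}

    def node_failed(self, node_id):
        """Deliver a failed node's work dynamically to an idle node
        of the same kind, returning the replacement node's id."""
        failed = self.nodes.pop(node_id)
        for nid, n in self.nodes.items():
            if n["task"] is None and n["attrs"] == failed["attrs"]:
                n["task"] = failed["task"]
                return nid
        return None
```

A rejection message is produced when the idle pool is too small, mirroring the behaviour described for task start-up.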
4.2 Data management
Data management (DM) is one of the key factors which provide a flexible and extensible mechanism for data manipulation in the grid and data delivery to grid sites based on the participating entities’ interests. Depending on the system architecture and operation, users need only to upload CAD models related to the task to grid data-storage nodes. Once in the virtual environment, CAD models cannot be loaded directly, because some preparatory work, including data storage, processing, and access, is required when moving from CAD to the virtual environment.
A virtual assembly task requires three main types of data: product data, including a CAD model and data in the virtual environment; information on assembly tools and processes (saved as a file); and simulation results, as shown in Figure 6. The first type, product data in the virtual environment, includes the display model, the part information model, assembly hierarchy information (saved as a file), and the collision detection model (CDM). The second data type consists of important auxiliary information for evaluation. Simulation results include elements such as an assembly process video, an assembly sequence, a path information file, and an evaluation report. Among these data, product data are managed and maintained by, and only by, the task manager, because security requirements dictate that he has the sole authority to write to the database; others can only upload data. The second data type is shared by all users of a particular task. Assembly results are saved in a folder accessible to users and can be downloaded by them consistently with their authorisations.
Some preparatory work must also be done for data transformation. Assembly hierarchy information and constraint information can be obtained from the CAD environment after a static interference check. The display model is the ‘visible’ part, and the CDM is used to perform a dynamic interference check; these models must be transformed from CAD models. A special interface based on the ACIS solid modelling kernel has been developed for this complex task. Each CAD model is transformed into a .sat file, from which the display model and the collision detection model can be obtained.
- Display model: several types of model can be used, such as step, flt (OpenFlight), etc. All of these are polygon models with colour, texture, and other appearance properties. Several types are supported by the system, and all the models will be transformed automatically before the system starts.
- CDM: this is an internally defined type which can be transformed from a polygon model as described in the last step and then simplified into a hierarchical bounding box depending on the collision-detection precision.
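A minimal sketch of how a polygon model might be reduced to a hierarchy of axis-aligned bounding boxes whose depth depends on the collision-detection precision. The midpoint splitting rule and the interpretation of precision as maximum tree depth are assumptions for illustration, not VAGrid's actual algorithm.

```python
def aabb(points):
    """Axis-aligned bounding box of a set of 3-D points."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def build_bvh(triangles, precision, depth=0):
    """Recursively split triangles into a bounding-box hierarchy.
    Here `precision` bounds the tree depth: higher precision keeps
    smaller, tighter boxes for the dynamic interference check."""
    points = [p for tri in triangles for p in tri]
    node = {"box": aabb(points), "children": []}
    if depth < precision and len(triangles) > 1:
        # split along the longest axis at the box midpoint
        lo, hi = node["box"]
        axis = max(range(3), key=lambda a: hi[a] - lo[a])
        mid = (lo[axis] + hi[axis]) / 2
        left = [t for t in triangles if sum(p[axis] for p in t) / 3 <= mid]
        right = [t for t in triangles if sum(p[axis] for p in t) / 3 > mid]
        if left and right:
            node["children"] = [build_bvh(left, precision, depth + 1),
                                build_bvh(right, precision, depth + 1)]
    return node
```

With precision 0 the whole part collapses to a single coarse box; raising the precision produces the progressively tighter hierarchies illustrated in Figure 7.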
Figure 7 shows a display model of a car chassis part and its collision models at various precisions. All these data are saved into grid data-storage nodes by the MySQL system. When the system starts, all the computing resource nodes will load their corresponding data according to task requirements.
4.3 Multi-user collaborative interactive operations
Unlike most existing CVA systems, users at each client send operation commands to a single ‘remote’ virtual scene and then receive the results of these operations, which are scene fragments as seen from the users’ viewpoints. It frequently occurs that many users stay at their assembly workstations, observing with two eyes and operating with two hands, where each of them can feel, but not see, the others’ presence. This is truly intuitive and effective collaboration.
4.3.1 Co-operation based on virtual user models
According to the system architecture, users at each client only send operating commands and receive simulation results over the network; they do not save product data or perform computing tasks. The management of users and of users’ operating commands is very heavy work. Therefore, a virtual-user model is used, and each real user has a corresponding virtual user in the virtual environment. The virtual user processes not only the current commands from the user, but also the user’s attribute information such as location, viewpoint parameters, and so forth. The architecture of the virtual user model is shown in Figure 8.

Here, in the virtual user attribute information, ‘Type’ can be task manager or general collaborative user. ‘RoleID’ is the ID number of the role which the user takes on in the task; for example, if the ID of a door designer is ‘6’, then the ‘RoleID’ of a virtual user acting as a door designer is ‘6’. Viewpoints are the ‘eyes’ of a virtual user in the virtual assembly environment; sound channels are his ‘ears’; operations are his two hands; and the supporting resource records the list of computing nodes which provide services to this user. The system sets up a new virtual user object automatically when a client joins.
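The virtual user attributes described above can be sketched as a small data structure. Field names are paraphrased from the text, and the value types (a tuple viewpoint, string node names) are assumptions for illustration.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class VirtualUser:
    """Sketch of the virtual user attribute information in Figure 8."""
    user_id: int
    type: str                 # 'task manager' or 'general collaborative user'
    role_id: int              # e.g. 6 for a door designer
    viewpoint: Tuple[float, float, float] = (0.0, 0.0, 0.0)  # the 'eyes'
    sound_channel: int = 0                                   # the 'ears'
    operations: List[str] = field(default_factory=list)      # pending commands
    supporting_resources: List[str] = field(default_factory=list)  # serving nodes

def join_task(clients, user_id, role_id, is_manager=False):
    """The system creates a new virtual user object when a client joins."""
    vu = VirtualUser(user_id,
                     "task manager" if is_manager else "general collaborative user",
                     role_id)
    clients[user_id] = vu
    return vu
```
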
In VAGrid, there is no synchronisation problem among the clients because there is one single virtual scene, but manipulation conflicts still exist, for example when:

- an object is grasped by two or more users simultaneously; or
- two objects that have an assembly constraint relation are manipulated by two users simultaneously.

In the former case, if the object has already been grasped, all further grasp requests are refused; otherwise processing continues. Then the level of authority and the role of the user are checked; if they are inappropriate, the request is refused; otherwise it is accepted. Finally, the exact time is recorded, so as to give precedence to the earlier user. In the latter case, when a user tries to assemble one part with another, the system first determines whether the other part is free; if so, the assembly operation is carried out, otherwise checking continues. Then the system determines whether the operator is the same person as the user. If yes, this means that the user is grasping two objects simultaneously, and assembly can be continued using a two-handed assembly process; if no, a prompt message that the object is already being operated on will be generated.
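The two conflict-resolution rules can be sketched directly from the description above. The dictionary shapes and the returned status strings are illustrative, not VAGrid's actual protocol.

```python
import time

def request_grasp(obj, user):
    """Grasp-conflict rule: refuse if already grasped, then check the
    user's authority, then record the time so the earlier user wins."""
    if obj.get("grasped_by") is not None:
        return "refused: already grasped"
    if user["authority"] < obj.get("required_authority", 0):
        return "refused: insufficient authority"
    obj["grasped_by"] = user["id"]
    obj["grasp_time"] = time.time()   # earlier user takes precedence
    return "accepted"

def request_assembly(part, target, user):
    """Constraint-conflict rule: a free target is assembled at once;
    if the other operator is the same user, both hands are in use."""
    if target.get("grasped_by") is None:      # free part
        return "assemble"
    if target["grasped_by"] == user["id"]:    # same person, two hands
        return "two-handed assembly"
    return "object is already being operated on"
```
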
4.3.2 Remote real-time interactive assembly operation

Many kinds of equipment can be used for interactive operation, such as FOB (Flock of Birds, a kind of position-tracking device), the data glove, mouse/keyboard, and other I/O equipment. In the VAGrid system, the data and its processing program all reside at the grid site, and the result at the client site is the rendered scene image. To achieve interaction, the user’s command must be sent to the grid site in real time. An image-based remote interactive scheme is provided; the basic workflow and hardware configuration of the client are shown in Figure 9. The manipulator command is first coded and sent to the grid over the network, where it is then decoded by the system and sent to the assembly simulation scene nodes. Then the parameters of the related virtual user change, and all rendering nodes within ‘RNodeList’ will be updated in response. In the same way, other computing nodes will also be updated, and a new video image will arrive at the client workstation. In addition, users can communicate with each other by sending text messages.
Using this scheme, users can manipulate virtual objects conveniently according to their hardware status. Because the inputs and outputs are separate, multi-channel immersive stereo can be easily achieved, as shown in Figure 9.
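The encode/decode leg of this scheme can be sketched as follows. JSON as the wire format, the function names, and the node structure are all assumptions for illustration; the paper does not specify the command encoding.

```python
import json

def encode_command(device, action, params):
    """Client side: code a manipulator command before sending it to
    the grid (JSON is an assumed, illustrative wire format)."""
    return json.dumps({"device": device, "action": action, "params": params})

def decode_and_dispatch(message, scene_nodes):
    """Grid side: decode the command and forward it to the assembly
    simulation scene nodes, which then update the rendering nodes."""
    cmd = json.loads(message)
    for node in scene_nodes:
        node.setdefault("pending", []).append(cmd)
    return cmd["action"]
```
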
4.3.3 Interactive operation using virtual hands or assembly using tools and equipment in the virtual environment

4.3.3.1 Interactive operation using virtual hands. There are two virtual hand models in the virtual scene, corresponding to the real hands of the user. Virtual hands are driven synchronously by real hands with the same position and orientation. The basic manipulation of an object (part or assembly) in the virtual scene includes grasping, moving, constraint confirmation, motion navigation, and object release.
Grasping: a user who wants to pick up an object sends an application request over the grid to the virtual scene and waits for feedback information. If the object has already been grasped, a message, ‘cannot be grasped now’, will be generated; otherwise the grasp will be successful.

Moving: after a successful grasp, the object is affixed to the virtual hand and can be moved to wherever the user wants. The whole scene will be updated in real time.

Constraint confirmation and motion navigation: when the object has been moved near to the desired position, a marker appears (the location label of the object), and the user can perform a gesture to confirm the precise mating location based on assembly constraint recognition.

Object release: the client (user) sends a command to release the object, and the relation between the object and the virtual hand is broken.
4.3.3.2 Assembly using tools and equipment. Taking the real assembly environment into account, including assembly tools, fixtures, and assembly equipment, is an important aspect of virtual assembly simulation. Interactive assembly operations using assembly tools should be provided. In the assembly process for a complex product, equipment can be classified into three types: automatic, semi-automatic, and manual tools, as shown in Figure 10. Assembly tools act as a special ‘assembly’, inheriting all the attributes, including the collaborative attributes, of the assembly object. Similarly, a part of an assembly tool inherits all the attributes of a part object. In addition, the assembly tool has particular attributes that make it able to manipulate other objects.
The working process of a virtual tool is similar to that in the real world. Most tools select their target object dynamically by colliding with it, except for some special tools. When a tool operates, for example a screw tool, it first selects an object by colliding with a bolt, then creates the axis constraint between the tool and the target, and finally the tool can be navigated using the axis constraint to the ending facet, which is the final position.
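The screw-tool cycle just described can be sketched as follows. The function name, the dictionary fields, and the scalar positions are all invented for illustration; real collision detection would use the CDM hierarchy rather than the toy proximity test here.

```python
def collides(a, b):
    """Toy proximity test standing in for real collision detection."""
    return abs(a["position"] - b["position"]) < 1.0

def operate_screw_tool(tool, objects):
    """Sketch of the virtual screw-tool cycle: select a bolt by
    collision, create an axis constraint, then navigate along that
    axis to the ending facet (the final position)."""
    # 1. dynamic target selection by collision with a bolt
    target = next((o for o in objects
                   if o["kind"] == "bolt" and collides(tool, o)), None)
    if target is None:
        return None
    # 2. create the axis constraint between tool and target
    constraint = {"axis": target["axis"],
                  "tool": tool["name"], "target": target["name"]}
    # 3. navigate the tool along the constrained axis to the ending facet
    tool["position"] = target["end_facet"]
    return constraint
```
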
4.4 Tools for evaluation and analysis in assembly simulation

To evaluate the assembly operation effectively, several auxiliary tools are provided, such as distance and dynamic gap measurement, assembly path record and replay, trajectory display, and an assisted evaluation report.
- Distance and dynamic gap measurement: the distance between two points, a point and a line (a line segment), a point and a facet, or two facets can be measured interactively. Dynamic gap computation for two models is also provided. The user can select the measured object, and the gap value will be shown in the virtual scene in real time.
- Assembly path and sequence: in the virtual environment, the assembly sequence and the paths followed by the parts are freely and arbitrarily selected using the data glove, but there must be an optimal sequence and set of paths. Recording and replaying the sequence of motion and the paths followed can be used to optimise the assembly process. Trajectory display is used as an intuitive way to show the path of motion of a component. In the VAGrid system, replaying the assembly record must be agreed upon by all the users because there is only one virtual scene.
- Part interference and collision: the basic function of virtual assembly is to verify the design intent, but here the interferences among tools and fixtures can also be calculated, in addition to those among parts.
- Assembly process evaluation report: an evaluation report will be created automatically when an assembly task is finished. The information in the report is divided into three parts. The first contains general statistical information about the whole assembly task. The second contains status information for all the parts. The last contains interference information, which shows the interference of parts in the assembly process using a special symbol.
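The distance and dynamic gap measurements in the first tool above can be sketched as follows. These function names are illustrative, and a real implementation would query the CDM hierarchy rather than raw vertex lists.

```python
import math

def point_segment_distance(p, a, b):
    """Distance from point p to the line segment ab, one of the
    interactive measurements listed above."""
    ab = [b[i] - a[i] for i in range(3)]
    ap = [p[i] - a[i] for i in range(3)]
    denom = sum(c * c for c in ab)
    # parameter of the closest point on ab, clamped to the segment
    t = 0.0 if denom == 0 else max(0.0, min(1.0,
        sum(x * y for x, y in zip(ap, ab)) / denom))
    closest = [a[i] + t * ab[i] for i in range(3)]
    return math.dist(p, closest)

def dynamic_gap(model_a, model_b):
    """Dynamic gap between two models, here reduced to the minimum
    pairwise vertex distance for illustration."""
    return min(math.dist(p, q) for p in model_a for q in model_b)
```

Calling `dynamic_gap` on the grasped object and a nearby part each frame would give the real-time gap value shown in the virtual scene.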
5 Case studies
With the aim of verifying the feasibility of the system while taking into account the assembly space, an assembly simulation was performed using the VAGrid system on a classical workstation to model a rear suspension and front suspension. The content of the simulation includes the layout of fixtures and tools, the operating space, and the dynamic interferences during the assembly process.
An assembly technologist acted as task manager and was in charge of setting up the task and coordinating all the participants. The participants included designers of the rear suspension and front suspension, designers of tools and fixtures, and assembly operators.
To use resources over the internet, plug-ins must be set up at each resource node. In this case, computing resources were used over an enterprise intranet, and product data were saved in the internal database of the enterprise. The detailed steps followed using VAGrid can be described as follows.
5.1 RA750 car assembly workstation simulation
5.1.1 Registration and logon for users
Each user first registers and obtains an account. The task manager sets up a simulation task team and defines roles and authority related to the present task. Figure 11 shows the interface for a user registering at a portal.
5.1.2 Preparation for simulation
Simulation initialisation consists of three steps: (1) all CAD models are uploaded to the grid database by designers from their dispersed locations; (2) the task manager accesses the database, transforms the models for simulation, and saves them to a given location; (3) the folder path together with the car assembly information is written into a simulation file (.xml is the default format). A segment of the simulation file is shown in Figure 12.
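Step (3) can be sketched by generating such a simulation file programmatically. Every element and attribute name below is invented for illustration, since the paper shows the actual file only as a figure.

```python
import xml.etree.ElementTree as ET

def simulation_xml(folder_path, parts):
    """Build an illustrative simulation file: the folder path plus the
    car assembly information (part name and model file per part)."""
    root = ET.Element("simulation")
    ET.SubElement(root, "dataFolder").text = folder_path
    assembly = ET.SubElement(root, "assembly")
    for name, model in parts:
        ET.SubElement(assembly, "part", name=name, model=model)
    return ET.tostring(root, encoding="unicode")
```
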
5.1.3 Simulation task initialisation
The task manager starts up the task using the file described above, and all related service nodes run automatically to support this task: assembly scene resources, rendering resources, and so forth. Once all related resources are running, the status of this task is set to collaborative, and users can join in. The whole assembly scene (at a default viewpoint) can be browsed by opening an interactive interface, shown in Figure 13, as part of the assembly scene at the task manager workstation. Other users can apply for a role and join the task.
5.1.4 Multi-user collaborative assembly operation

Once all the related users have joined the task, the collaborative assembly operation will be performed under the coordination of the task manager. Several interactive modes can be supported by the system, among them the ordinary keyboard and mouse as in CAD, the 5DT Cyberglove and FOB (flock of birds), and the Cyber Glove/Touch glove with haptic sensing and FOB. Figure 14 illustrates two interaction modes: (a) a user operating with a keyboard and mouse; (b) a user operating with the Cyber Glove/Touch glove and FOB.
5.1.5 Aids to evaluation and analysis

Assembly evaluation tools can be used at any point during the operation process as needed. To access these tools, the user must display the space analysis menu and select ‘distance measure’ as the analysis mode. In this case, two modes will be used. One displays the dynamic gap in real time between the grasped object and other important objects in the environment, as shown in Figure 15(a). The other displays the minimum distance between the given object and another object at final assembly status, as shown in Figure 15(b). When the user displays the assembly path management menu and selects a path record, the system generates a recording of the assembly process, including assembly sequence and path of motion. By replaying the path and the trajectory display, the best path can be determined. In addition, trajectory display with body mode can be used to check the tools’ operational space. Figure 16 shows the trajectory of moving a rear suspension using interactive equipment.
A design evaluation report is generated after the assembly task is completed and will be saved to the grid database in the appropriate folder depending on the users. A segment of a report of this type for an assembly operation is shown in Figure 17. A total of 12 interferences occurred. The image shows the interference between the screw tool and the front suspension fixture, which left too small a space for the operation. The fixture will be modified later in the CATIA environment and re-verified in VAGrid. Eventually all the interferences will be eliminated.
5.2 Case discussion and analysis

The assembly technologist, related designers, and operators joined in the evaluation and verification task. The application result shows that VAGrid is an effective tool for assembly process evaluation.
5.2.1 VAGrid prototype system

The prototype system can meet the requirements of multi-user real-time collaborative assembly simulation based on grid technologies. The GVAS part undertakes the scene visualisation and simulation computing tasks by configuring and starting the grid resources automatically.
(1) Interactive assembly operations are smooth, with no feeling of lag during simulation. The scene update frequency is greater than 17 F/S (frames/second) in stereo display mode (34 F/S in non-stereo mode), which is much higher than the efficiency achievable using a single PC (about 2–6 F/S).
(2) Consistency of the scene at all user clients is well maintained, since there is only one scene manager node at the grid.
(3) Only the task manager can access the database at GVAS, and thus data security and consistency can be guaranteed.
5.2.2 Assembly process evaluation application

The simulation process is carried through according to the assembly process requirements using the assistant tools.

(1) The assembly sequence is reasonable, and there is no component that cannot be assembled.
(2) The product assembly path is feasible, and no interference caused by the design occurs.
(3) There is sufficient room for manoeuvre except around the front suspension fixture, the size of the upper frame of which needs to be reduced.
6 Conclusions

CVA technologies are being applied more and more in industry, but still encounter many problems. In this paper, a CVA system called VAGrid based on grid technology is presented, in which the heavy computing tasks such as model rendering and image processing are performed over the grid using resources available over the internet. Product data are managed independently by the grid with role-based access authorisation, which is secure enough for this purpose. Users need to perform only simple interactive operations using a graphic interface. Compared with existing CVA systems, VAGrid offers many advantages with regard to computing capability, data security, and scalability, and moreover is well suited to constructing enterprise applications.

However, VAGrid still has some disadvantages in application. It relies on having a high-performance network, and plug-ins must first be set up for a PC to be used as a computing resource node. In addition, a good dynamic resource configuration mechanism is essential, because otherwise part of a virtual scene will disappear if a node does not work; this mechanism is being studied further.

Assembly evaluation using CVA is still a luxury application which is mostly used by large companies. Providing service for other enterprises using VAGrid is a research topic for the future.
Acknowledgements

This work has been supported by a Key Project grant from the National Natural Science Foundation of China (grant no. 90612017) and a Key Project grant from the Science & Technology Commission of Shanghai Municipality (grant no. 08DZ1121000).
References
Bidarra, R., Kranendonk, N., Noort, A., and Bronsvoort, W.F., 2002. A collaborative framework for integrated part and assembly modeling. In: Proceedings of the 7th ACM symposium on solid modeling and applications. ACM Press, 389–400.
Chen, L., Song, Z., and Feng, L., 2004. Internet-enabled real-time collaborative assembly modeling via an e-assembly system: status and promise. Computer-Aided Design, 36 (9), 835–847.
Chryssolouris, G., Mavrikios, D., Pappas, M., Xanthakis, E., and Smparounis, K., 2009. A web and virtual reality-based platform for collaborative product review and customisation. In: L. Wang and A.Y.C. Nee, eds. Collaborative design and planning for digital manufacturing. London: Springer, 137–152.
Dai, K.Y., Li, Y.S., Han, J., Lu, X.H., and Zhang, S.S., 2006. An interactive web system for integrated three-dimensional customization. Computers in Industry, 57 (8–9), 827–837.
Emad, N., Shahzadeh, S., and Dongarra, J., 2006. An asynchronous algorithm on the NetSolve global computing system. Future Generation Computer Systems, 22 (3), 279–290.
Fan, L.Q., Senthil, K.A., Jagdish, B.N., and Bok, S.H., 2008. Development of a distributed collaborative design framework within a peer-to-peer environment. Computer-Aided Design, 40 (9), 891–904.
Foster, I., 2006. Globus Toolkit version 4: software for service-oriented systems. Journal of Computer Science and Technology, 21 (4), 513–520.
Foster, I. and Kesselman, C., 1999. The grid: blueprint for a new computing infrastructure. San Francisco: Morgan Kaufmann.
Grimshaw, A. and Natrajan, A., 2005. Legion: lessons learned building a grid operating system. Proceedings of the IEEE, 93 (3), 589–603.
Li, Z., Jin, X.L., Cao, Y., Zhang, X.Y., and Li, Y.Y., 2007. Architecture of a collaborative design grid and its application based on a LAN. Advances in Engineering Software, 38 (2), 121–132.
Li, Z., Jin, X.L., Cao, Y., Zhang, X.Y., and Li, Y.Y., 2007. Conception and implementation of a collaborative manufacturing grid. International Journal of Advanced Manufacturing Technology, 34 (11–12), 1224–1235.
Liang, J.S., 2007. Web-based 3D virtual technologies for developing a product information framework. International Journal of Advanced Manufacturing Technology, 34 (5–6), 617–630.
Liu, Q. and Shi, Y., 2008. Grid manufacturing: a new solution for cross-enterprise collaboration. International Journal of Advanced Manufacturing Technology, 36 (1–2), 205–212.
Lu, C., Fuh, J.Y.H., Wong, Y.S., Qiu, Z.M., Li, W.D., and Lu, Y.Q., 2006. Design modification in a collaborative assembly design environment. Journal of Computing and Information Science in Engineering, 6 (2), 200–208.
Marsh, J., Glencross, M., Pettifer, S., and Hubbold, R., 2006. A network architecture supporting consistent rich behavior in collaborative interactive applications. IEEE Transactions on Visualization and Computer Graphics, 12 (3), 405–416.
Noh, S.D., Park, Y.J., Kong, S.H., Han, Y.-G., Kim, G., and Lee, K.I., 2005. Concurrent and collaborative process planning for automotive general assembly. International Journal of Advanced Manufacturing Technology, 26 (5), 572–584.
Pappas, M., Karabatsou, V., Mavrikios, D., and Chryssolouris, G., 2006. Development of a web-based collaboration platform for manufacturing product and process design evaluation using virtual reality techniques. International Journal of Computer Integrated Manufacturing, 19 (8), 805–814.
Rycerz, K., Tirado-Ramos, A., Gualandris, A., Portegies Zwart, S.F., Bubak, M., and Sloot, P.M.A., 2007. Interactive N-body simulations on the grid: HLA versus MPI. International Journal of High Performance Computing Applications, 21 (2), 210–221.
Shyamsundar, N. and Gadh, R., 2002. Collaborative virtual prototyping of product assemblies over the Internet. Computer-Aided Design, 34 (10), 755–768.
Trappey, A. and Hsiao, D., 2008. Applying collaborative design and modularized assembly for automotive ODM supply chain integration. Computers in Industry, 59 (2–3), 277–287.
Wang, Q.H. and Li, J.R., 2006. Interactive visualization of complex dynamic virtual environments for industrial assemblies. Computers in Industry, 57, 366–377.
Yang, R.D., Fan, X.M., Wu, D.L., and Yan, J.Q., 2007. Virtual assembly technologies based on constraint and DOF analysis. Robotics and Computer-Integrated Manufacturing, 23 (4), 447–456.
Zhen, X.J., Wu, D.L., Fan, X.M., and Hu, Y., 2009. Distributed parallel virtual assembly environment for automobile development. Assembly Automation, 29 (3), 279–289.
Mastering demand and supply uncertainty with combined product and process configuration
C.N. Verdouw, A.J.M. Beulens, J.H. Trienekens and T. Verwaart
(Received 18 June 2009; final version received 15 January 2010)

The key challenge for mastering high uncertainty of both demand and supply is to attune products and business processes in the entire supply chain continuously to customer requirements. Product configurators have proven to be powerful tools for managing demand uncertainty. This paper assesses how configurators can be used for combined product and process configuration in order to support mastering high uncertainty of both supply and demand. It defines the dependence between product and process configuration in a typology of interdependencies. The addressed dependences go beyond the definition phase and also include the effects of unforeseen backend events during configuration and execution. Based on a case study in the Dutch flower industry, a conceptual architecture is proposed for coordination of these interdependencies, and development strategies are identified.
Keywords: concurrent engineering; configuration; ERP; flower industry; mass customisation; supply chain management
1 Introduction
Mastering both demand and supply uncertainty is a key challenge for many companies. Markets are increasingly turbulent and also the vulnerability of production and logistics processes is growing. The management of uncertainty has been addressed as an essential task of supply chain management (SCM) (among others by Davis 1993). The well-known bullwhip effect shows that amplification of demand uncertainty can be reduced by supply chain coordination (Lee et al. 1997). There are two main categories of supply chain uncertainties: i) inherent or high-frequency uncertainties arising from mismatches of supply and demand; ii) uncertainties arising from infrequent disruptions to normal activities, such as natural disasters, strikes and economic disruptions (van der Vorst and Beulens 2002, Kleindorfer and Saad 2005, Oke and Gopalakrishnan 2009). This paper is concerned with the first category of uncertainties, which can either be demand- or supply-related (Lee 2002).
For coping with the addressed uncertainties, SCM literature initially focused on creating so-called lean supply chains that efficiently push products to the market. Lean supply chains build upon reduction of demand uncertainty, especially by product standardisation. Customers must choose from a fixed range of standard products that are made to forecast in high volumes. Business processes in lean supply chains can be highly automated by Enterprise Resource Planning (ERP) systems (Davenport and Brooks 2004).
In the late 1990s, the then dominant approach of leanness was criticised more and more. It was argued that in volatile markets it is impossible to remove uncertainty. Companies therefore should accept differentiation and unpredictability and focus on better uncertainty management. Agility was proposed as an alternative approach that aims for rapid response to unpredictable demand in a timely and cost-effective manner (Fisher 1997, Christopher 2000). It is founded on a mass customisation approach that combines the seemingly contradictory notions of flexible customisation with efficient standardisation (Davis 1989, Pine et al. 1993, Chandra and Kamrani 2004). This is done by fabricating parts of the product in volume as standard components, while achieving distinctiveness through customer-specific assembly of modules (Duray et al. 2000).

Besides product modularity and flexible assembly systems (cf. Molina et al. 2005), product configurators are addressed as important enabling technologies (Duray et al. 2000, Zipkin 2001, Forza and Salvador 2002). Product configurators provide an interface for rapid and consistent translation of the customer's requirements into the product information needed for tendering and manufacturing (Sabin and Weigel 1998, Forza and Salvador 2002, Tseng and Chen 2006, Reinhart et al. 2009).

*Corresponding author. Email: cor.verdouw@wur.nl

International Journal of Computer Integrated Manufacturing
Vol. 23, No. 6, June 2010, 515–528
ISSN 0951-192X print/ISSN 1362-3052 online
© 2010 Taylor & Francis
DOI: 10.1080/09511921003667706
Until then, SCM focused on strategies for coping with demand uncertainty. Lee (2002) was one of the first who stressed the impact of supply uncertainty on supply chain design. Supply chains characterised by high supply uncertainty require the flexibility to deal with unexpected changes in the business processes. Disturbances of logistics, production or supply of materials should rapidly be observed and lead to process changes, including re-planning and re-scheduling, purchasing new material, hiring alternative service providers or negotiating new customer requirements. The rigid planning and scheduling systems of traditional ERP systems may cause problems in this type of supply chain (Akkermans et al. 2003, Zhao and Fan 2007). Modular software approaches, in particular service-oriented architecture (SOA), have been proposed to overcome these limitations. In these approaches, process models guide the workflow planning and execution in run-time information systems. This puts the emphasis on process configuration to achieve the required backend flexibility. Process configuration supports a rapid and consistent specification of the workflow that is needed to fulfil specific customer orders (Schierholt 2001 and others). For example, local deliveries from stock follow a different workflow than exports that are produced to order. Moreover, it supports reconfiguration of the workflow in case of unexpected supply events, e.g. components that were originally planned to be produced can be re-planned to be purchased.
Supply chains characterised by both uncertain demand and supply require a combination of responsiveness to changing demand and the flexibility to deal with unexpected changes in the business processes. Following Lee (2002), in this paper the term 'agility' is used to characterise these types of supply chains. In agile supply chains, demand requirements and supply capabilities, i.e. products and processes including resources, should be continuously attuned. Therefore, both front-office and back-office systems need to be flexible and smoothly integrated. This paper explores the application of configurators to both products and processes to achieve this.
The majority of the existing configuration research focuses either on product or process configuration. However, interdependence among product and process configuration is relatively under-researched (cf. Jiao et al. 2007, Chandra and Grabis 2009). A literature review, which is presented later, shows that available literature on this subject focuses on the definition domain, i.e. translation of customer requirements to an integrated design of products and manufacturing processes (Jiao et al. 2000, de Lit et al. 2003, Jiao et al. 2005, Bley and Zenner 2006). However, the presence of supply uncertainty also results in a high mutual dependence after the definition phase. During configuration and execution, the effects of unforeseen backend events on the defined product and fulfilment processes must continuously be evaluated, based on the actual state of the required resources. No research is found that provides an integrated consideration of the interdependences during definition, configuration and execution, nor that develops the corresponding information architecture for coordination of this interdependence using configurators.
The present research aims to contribute to this gap by assessing how configuration software can be used for combined product and process configuration to support mastering high uncertainty of both supply and demand. More specifically, it aims to: i) identify the interdependences between product and process configuration; ii) design an information architecture for coordination of this interdependence using configurators; iii) identify configurator development strategies. The focus is on the order fulfilment cycle that starts with configuring orders in interaction with customers and ends with delivering the finished goods (Lin and Shaw 1998, Croxton 2003).
In the remainder of this paper, first an account is given of the applied research method. Next, the paper introduces the problem context of the case firm, which is a typical example of a company operating in agile supply chains. Subsequently, an overview is provided of the literature about the use of configurators for products and processes and a typology of its interdependencies is defined. The case study results are then presented. The paper concludes by addressing challenges for future development and summarising the main findings.
2 Research method

The research used a design-oriented case study method to answer the research question addressed in Section 1. Design-oriented research aims to develop a body of generic knowledge that can be used in designing solutions to management problems (van Aken 2004). It is a foundational methodology in information systems research (Hevner et al. 2004). Design-oriented research is typically involved with 'how' questions, i.e. how to design a model or system that solves a certain problem. A case-study strategy fits best for this type of question, in particular in case of complex phenomena that cannot be studied outside their context (Benbasat et al. 1987, Yin 2002). This characterises the present research, because it focuses on the interdependences between product configuration, process configuration and the planning and control of fulfilment. Therefore,
an in-depth explorative case study approach was chosen that puts the different related topic areas into context. In such a case study, it makes sense to focus on an extreme situation that clearly highlights the process of interest (Eisenhardt 1989, Yin 2002). In the present research, this is the existence of supply uncertainty in addition to demand uncertainty. Therefore, the present authors searched within a sector that is inherently involved with high supply uncertainty, i.e. the Dutch flower industry. Firms in this sector face high supply uncertainty because of the dependence on the growth of living materials. Production processes are, therefore, vulnerable to weather conditions, pests and other uncontrollable factors. Next, a firm within this sector was selected, which was characterised by high demand uncertainty. Additional criteria were product variety and practical reasons, in particular the firm's willingness to cooperate and the authors' familiarity with the domain.
Data collection was done in semi-structured open interviews with managers and employees of the case company and additional desk research. In total, 14 persons were interviewed in nine meetings (five managers and nine employees). The division of roles is as follows:

Management: sales (1), finance (1), logistics (1), production (1), CEO (1)
Employees: order processing (1), planning (2), expedition (1), ICT (1), production seedlings (2), production cuttings (2)
The questionnaire comprises four main parts: supply chain structure, business processes, control and information management. Every section includes open questions both for mapping and evaluation (see Appendix 1). Three in-depth interviews were held covering the complete questionnaire. The subsequent interviews focused on specific business processes and were combined with observation of the company's operations and systems.
The research was organised as follows. First, the dependence between product and process configuration was defined in a typology of interdependencies based on a literature review. Second, the case-study firm was investigated in interviews and additional desk research. Next, the investigation results were matched with the developed theoretical framework to define the basic design requirements. The researchers then designed a conceptual information architecture for combined support of both product and process configuration. The designed architecture was tested in a proof-of-feasibility implementation at Sofon, a Dutch configurator vendor, and evaluated by the management of both the case-study firm and Sofon. Finally, general development strategies were abstracted from the case findings based on the developed theoretical framework.
3 Configuration in the Dutch flower industry

This section introduces the case firm and its need for product and process configuration.
3.1 Dutch flower industry

The Dutch flower industry is traditionally a strong and innovative sector with a leading international competitive position and a great impact on the national economy. It is internationally renowned as a strong cluster (Porter 1998) that produces cut flowers and potted plants, mainly in greenhouses. In particular, production of potted plants has many similarities with manufacturing. It is also a form of discrete production, in which products are assembled from plants, flower-pots, decorations, labels and packaging. The creation of potted plants also has some features of continuous production, because of the process of continuous growth, but potted plants remain discrete units, traceable at single product level.
The extent to which processes are order-driven differs a lot, not only among different companies but also within firms. For the spot market, products are made to stock and distribution is either to order (usually via traders) or anticipatory (usually via auctions). For other cases, plants are often produced to forecast, while assembling, labelling and packaging are order-driven.
The flower industry is characterised by high uncertainty of both demand and supply. Supply uncertainty is high, because chains are vulnerable to product decay, weather conditions, pests, traffic congestion and other uncontrollable factors. Further, demand uncertainty is also high because of weather-dependent sales, changing consumer behaviour and increasing global competition, amongst other things. This results in high variability of supply capabilities and demand requirements in terms of volume, time, service levels, quality and other product characteristics.
3.2 Case company profile

The case company is a global supplier of a wide range of young potted plants. It is a rapidly growing company, with 350 staff and with production locations in Holland, Brazil, Kenya, Israel and Zimbabwe. Annually, over 100 million young plants are delivered as input material to growers or wholesalers.
The firm is characterised by high product variety. It produces about 800 varieties in six main categories, including begonia and cyclamen. In addition, over 400 varieties are sourced from other producers to offer a complete assortment. Varieties differ, amongst other things, in colour, shape and growing characteristics. The firm propagates young plants in two basic ways: as seedlings or cuttings. Seedlings can be sold at different stages of the growing process. Cuttings can be sold rooted or unrooted and in different sizes. All young plants can be delivered in different types of trays. Furthermore, delivery conditions vary. For example, due to product-inherent characteristics, some varieties can only be delivered in specific periods, and quality and prices are often time-dependent. In addition, royalties differ per variety and per continent.
Also process variety is high. Production differs between seedlings and cuttings. For seedlings, seeds are sourced from breeders, seeded in trays and budded. Budded seeds can be sold directly or grown further. Seedlings are mostly seeded to customer order, but also produced to forecast or sourced to order (especially for specific variety mixtures). Cuttings are mostly produced by the firm, but are also sourced from third parties. Production of cuttings starts with propagation and growing of parent plants, which is done in southern countries for reasons of climate and labour costs. After almost 2 years, cuttings can be harvested. They are shipped directly to customers or transported to Holland for rooting. Unrooted cuttings can be stored for 10 days at the most, including 3 days for transportation. The company strives for order-driven harvesting and rooting of cuttings, but production to stock also occurs. Furthermore, logistics are complex, due to the global distribution of both production locations and customers, combined with high requirements concerning delivery lead-times and flexibility.
3.3 Need for combined product and process configuration

The interviews indicated that the case company is characterised by high uncertainty of both demand and supply. Demand requirements (about product features, quality and service levels) are diverse and difficult to predict. Also, predictability of the demand amount and time is low, although basic seasonal patterns can be determined. Moreover, the lead-times, yields and qualities of production very much depend on the growth of living materials.
The company deals with this high uncertainty by providing variety in their product assortment and flexibility in meeting customer demands with regard to product specifications and delivery schedules. To date, it has relied heavily on improvisation by experienced employees. However, since the company is growing, they face problems in keeping this manageable, which sets limits to further growth. As a consequence, the interviewees in particular stressed the lack of tools for customer requirement definition based on real-time information of the supply capabilities, as well as flexible back-office systems for (re)planning, (re)scheduling and monitoring of order fulfilment. The most urgent bottlenecks addressed are:

Knowledge of production processes and options to reconfigure these processes is only implicitly available in the minds of some experienced staff members. This problem is manageable with the firm's current scale, but inhibits further growth.

Information systems are fragmented and poorly integrated. They require a lot of manual data re-entry. Information inconsistency leads to larger safety buffers than strictly required, and many redundant data checks and duplicate registrations are performed.

Mid-term planning is not coordinated with operational data, due to a lack of system integration.
The company's management assessed existing ERP systems for solving these problems, but found them to lack the required flexibility. Therefore, the firm decided to consider implementation of configuration software for products and processes, in combination with an ERP system, as a possible option to master uncertainty.
4 Role of configurators in supply chain management

This section provides some conceptual background about the use of configurators and defines the dependence between product and process configuration in a typology of interdependencies.
4.1 Product configurators in responsive supply chains

Configurators have emerged from the development of rule-based product design in the field of artificial intelligence. A well-known early application was R1, a product configurator for VAX computers (McDermott 1981). A product configurator is a tool that guides users interactively through specification of customer-specific products (Sabin and Weigel 1998, Forza and Salvador 2002, Tseng and Chen 2006, Reinhart et al. 2009). Configurators generate specific product variants by combining sets of predefined components and specifying features according to permitted values. Next, they check the completeness and consistency of configured products based on rules that define the interdependencies between components or features. Product configurators are based on generic product models, which define the class of objects that can be configured (Hegge and Wortmann 1991).
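As a minimal sketch of this mechanism, the following illustrates how a generic product model and a small rule set can check completeness and consistency of a configured variant. All component names, values and rules here are hypothetical, chosen only for illustration; they do not come from the paper or from any real configurator.

```python
# Minimal sketch of a rule-based product configurator (hypothetical data).
# A generic product model lists components with their permitted feature values;
# rules encode interdependencies between the chosen values.

PRODUCT_MODEL = {
    "plant": {"begonia", "cyclamen"},
    "tray": {"small", "large"},
    "rooting": {"rooted", "unrooted"},
}

# Each rule returns True when a configuration is consistent.
RULES = [
    # Hypothetical rule: cyclamen is only delivered rooted.
    lambda cfg: not (cfg.get("plant") == "cyclamen"
                     and cfg.get("rooting") == "unrooted"),
]

def check(config):
    """Return a list of problems; an empty list means the variant is valid."""
    problems = []
    for component, value in config.items():        # permitted-value check
        if component not in PRODUCT_MODEL:
            problems.append("unknown component: %s" % component)
        elif value not in PRODUCT_MODEL[component]:
            problems.append("%r not permitted for %s" % (value, component))
    for component in PRODUCT_MODEL:                # completeness check
        if component not in config:
            problems.append("missing component: %s" % component)
    for rule in RULES:                             # consistency check
        if not rule(config):
            problems.append("rule violation")
    return problems

print(check({"plant": "begonia", "tray": "small", "rooting": "unrooted"}))  # []
```

Real configurators add interactive guidance and far richer constraint handling, but the model/rules/check split mirrors the description above.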
Currently, configurators play an important role in responsive supply chains, which are characterised by high demand uncertainty and low supply uncertainty (Lee 2002). They are widely used for product configuration to enable rapid response to customer demands. In interaction with the user, the software generates consistent and complete specifications of customised products, taking into account both the customer's requirements (e.g. functional specifications and delivery conditions) and feasibility of production, sourcing and delivery. Together with the product specification, current configurators can produce commercial offers, draft contracts and schedules, and contracts for support and maintenance of the product. The software can be designed for use either by a sales representative of the supplier or by a customer, e.g. through the Internet. In both cases, the configuration process results in a quick and effective order specification that can directly be entered into the production planning and scheduling systems.
4.2 Configuration in agile supply chains

Next to demand uncertainty, agile supply chains are also characterised by high uncertainties at the supply side (Lee 2002). High supply uncertainty makes great demands on the flexibility of supporting information systems. The development of modular software approaches especially has been advocated for realising this flexibility (for example, Verwijmeren 2004). SOA is the latest development in software modularity (Wolfert et al. 2010). In a SOA approach, business process models are leading in routing event data amongst multiple application components that are packaged as autonomous, platform-independent services (Erl 2005, Papazoglou et al. 2007). Consequently, new or adapted business processes can be supported without changing the underlying software. Induced by the emergence of SOA, ERP vendors have also begun to modularise their software (Møller 2005, Loh et al. 2006).

The leading role of business processes in modular software approaches puts emphasis on rapid configuration of processes in achieving flexibility. The concept of process configuration was introduced by Schierholt (2001), who applied the principles of product configuration to support process planning. Rupprecht et al. (2001) and Zhou and Chen (2008) described approaches for automatic configuration of business process models for specific projects. Jiao et al. (2004) formalised the modelling of process configurations for given product configurations. Verdouw et al. (2010) argue that reference process models should be set up as dynamic configurable models to enable ICT mass customisation and they assess the readiness of existing models. Furthermore, the ERP vendor SAP has addressed process configuration to manage the complexity of their reference process models that are used as a basis for system implementation. They conducted extensive research to make these models configurable (Dreiling et al. 2006, Rosemann and van der Aalst 2007). Building upon this, Rosa et al. (2007) proposed a questionnaire-driven approach to guide users interactively through process model configuration.
Nevertheless, the majority of existing literature focuses either on product or on process configuration. The mutual dependence between product and process configuration is relatively under-researched (cf. Jiao et al. 2007, Chandra and Grabis 2009). The papers found in the literature review all focus on the definition domain, i.e. translation of customer requirements to an integrated design of products and manufacturing processes. Jiao et al. (2000) put forward an integrated product and process model that unifies bills-of-material and routings, called generic bill-of-materials-and-operations. Jiao et al. (2005) proposed a product-process variety grid to unify product data and routing information. de Lit et al. (2003) introduced an integrated approach for product family and assembly system design. Bley and Zenner (2006) developed an approach to integrate product design and assembly planning.

As argued before, the presence of supply uncertainty also results in a high mutual dependence after the definition phase. During configuration and execution, the effects of unforeseen backend events on the defined product and fulfilment processes must continuously be evaluated, based on the actual state of the required resources. However, no research is found that explicitly considers the interdependences during definition, configuration and execution and that develops the corresponding information architecture for coordination of this interdependence using configurators. Therefore, the next section first develops a typology of product and process interdependences based on organisational literature.

4.3 Typology of interdependences between product and process configuration
Dependence is a central notion of General Systems Theory. This theory argues that the whole of a system is more than its parts, because of the existence of dependencies between their elements (Bertalanffy 1950). Thompson (1967) was one of the first to apply this idea to organisational theory. He distinguished three basic types of dependency, pooled, sequential and reciprocal interdependence, which require different coordination modes: coordination by standardisation, by plan and by mutual adjustment. His work is refined by many others, all focusing on coordination of generic dependencies between organisational subunits. Malone and Crowston (1994) have introduced different types of dependencies between activities and resources. They distinguish between flow, sharing and fit dependencies (see also Malone et al. 1999). Flow dependencies arise whenever one activity produces a resource that is used by another activity (precedence relation). Sharing dependencies occur whenever multiple activities all use the same resource. Fit dependencies arise when multiple activities collectively produce a single resource.
If these interdependencies are applied to product and process configuration, distinction should be made between different decision levels, i.e. definition, configuration and execution. First, in the definition phase designers predefine reference product and process models. These are generic models, or family models, which define the possible product and process components and which include rules that define the possible combinations of components. A product reference model is constrained by the available business processes as defined in process reference models. Conversely, a process reference model must contain the business processes that produce the variety of products as defined in product reference models. Second, the configuration phase starts when a customer order request comes in. A customised product is configured in interaction with the customer, taking into account whether the enabling business process can be configured. Therefore, the required input products and capacity must be available to promise. The result is an accepted order, which triggers configuration of the business processes that fulfil the order. These might include distribution activities (make to stock), production activities (assemble/make to order) and engineering activities (engineer to order). Finally, the execution phase comprises planning, scheduling and completion of the configured business processes. The progress is monitored continuously and, if necessary, the product and process configurations are updated.
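The order-acceptance gate of the configuration phase, accepting an order only when a product variant can be matched with a configurable process and the required inputs and capacity are available to promise, can be sketched as follows. Every function and field name below is a hypothetical placeholder invented for illustration, not part of the paper's architecture:

```python
# Sketch of the configuration-phase gate described above: an order is accepted
# only when an enabling process can be configured for the product variant and
# the required input products and capacity are available to promise.
# All functions and fields are hypothetical placeholders.

def required_inputs(product_cfg):
    # Placeholder: derive input products from the configured variant.
    return {"seeds": product_cfg["quantity"]}

def configure_process(product_cfg):
    # Placeholder: return a process variant, or None if none can be configured.
    return {"capacity_needed": product_cfg["quantity"] // 100}

def accept_order(product_cfg, stock, capacity):
    """Available-to-promise check for a configured product."""
    required = required_inputs(product_cfg)
    process = configure_process(product_cfg)
    return (process is not None
            and all(stock.get(item, 0) >= qty for item, qty in required.items())
            and capacity >= process["capacity_needed"])

print(accept_order({"quantity": 500}, {"seeds": 1000}, capacity=10))  # True
```

An accepted order would then trigger process configuration and, in the execution phase, planning and scheduling of the configured process.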
Figure 1 more precisely defines the interdependence among product and process configuration in a typology of dependencies. This typology is an application of the categorisation of Malone and Crowston (1994) and Malone et al. (1999) as discussed above. Product configurators are primarily means for coordination of fit dependencies: assembling consistent product variants that meet specific customer requirements from available components and options. Analogously, process configuration coordinates the assembly of consistent process variants from available activities or services. The alignment of product and process configuration requires coordination of precedence (flow) dependencies: process configuration is conditional for product configuration and vice versa. Furthermore, process configuration depends on operational execution because fulfilment of the configured process needs capacity and input products. More specifically, the defined interdependencies are:
(1) Product assembling: multiple product modules are required to produce a single product (fit dependency). Product configurators are primarily means for coordination of this dependency. They specify components, options, interfaces and interdependency rules in reference product models and guide customer-specific configuration of product variants.

(2) Process rules precedence: process properties set constraints to possible product configurations. Consequently, process reference models are a precondition for product reference models (flow dependency). This dependency is mostly coordinated by mutual adjustment of product and process models by designers, ideally supported by tools that ensure consistency of both model types.

(3) Order precedence: specific product configurations (order information) are input for configuration of specific fulfilment processes (flow dependency). Therefore, order information must be interpretable by back-office systems for production and distribution. This dependency can be coordinated by standardisation of order data in an executable form, including bills-of-material.

(4) Process assembling: multiple activities, i.e. process modules, are required to compose a single process (fit dependency). Process configuration is primarily a mechanism to coordinate this dependency. It specifies the activities, interfaces and interdependency rules in reference process models and guides configuration of order-specific processes.

(5) Process precedence: the output of the process configuration task is conditional for the planning and scheduling of the fulfilment (flow dependency). Execution of a configured process consumes input products (raw material or semi-finished products) and uses capacity. This dependency can be coordinated by standardisation of configured processes in a model format that is interpretable by planning and scheduling systems.

(6) Product precedence: for execution of a fulfilment process, the required input products must be available (flow dependency). This dependency can be coordinated by integration with planning and scheduling mechanisms.

(7) Capacity precedence: for order-driven processes, the required capacity must be available (flow dependency). This can be coordinated by integration with planning and scheduling mechanisms.

(8) Capacity rules precedence: the characteristics of used capacity (e.g. machine set-up, other facility layouts and human resource competences) set constraints for the possibilities for process configuration (flow dependency). This can be coordinated similarly to process rules dependencies: mutual adjustment of capacity layouts and process models by designers, ideally supported by tools that ensure model consistency.

The last dependencies to be mentioned are related to the operational execution of configured processes:
(1) Product consumption: multiple configured processes all use the same input products (sharing dependency).

(2) Capacity usage: configured processes for multiple orders all use the same capacity (sharing dependency).

(3) Capacity assembling: multiple capacity units are required to set up specific layouts (fit dependency).

These last three dependencies are coordinated by planning and scheduling systems. They do not directly impact product and process configuration (only via product, capacity and process precedences) and are thus beyond the scope of this paper.
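For a compact restatement, the typology above can be encoded as a small data model. The sketch below merely transcribes the classification from the list, with coordination mechanisms paraphrased; it adds no information beyond the text:

```python
# The dependency typology above, restated as a small data model.
# Names and coordination mechanisms are paraphrased from the list in the text.
from dataclasses import dataclass
from enum import Enum

class Kind(Enum):
    FIT = "fit"          # multiple elements jointly produce one resource
    FLOW = "flow"        # one element's output is a precondition for another
    SHARING = "sharing"  # multiple elements use the same resource

@dataclass(frozen=True)
class Dependency:
    name: str
    kind: Kind
    coordinated_by: str

TYPOLOGY = [
    Dependency("product assembling", Kind.FIT, "product configurator"),
    Dependency("process rules precedence", Kind.FLOW, "mutual adjustment of models"),
    Dependency("order precedence", Kind.FLOW, "standardised order data"),
    Dependency("process assembling", Kind.FIT, "process configuration"),
    Dependency("process precedence", Kind.FLOW, "standardised process models"),
    Dependency("product precedence", Kind.FLOW, "planning and scheduling"),
    Dependency("capacity precedence", Kind.FLOW, "planning and scheduling"),
    Dependency("capacity rules precedence", Kind.FLOW, "mutual adjustment of models"),
    # Execution-level dependencies, outside the configurators' direct scope:
    Dependency("product consumption", Kind.SHARING, "planning and scheduling"),
    Dependency("capacity usage", Kind.SHARING, "planning and scheduling"),
    Dependency("capacity assembling", Kind.FIT, "planning and scheduling"),
]

flow = [d.name for d in TYPOLOGY if d.kind is Kind.FLOW]
print(len(flow))  # 6 flow (precedence) dependencies in the typology
```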
5 Information architecture for combined product and process configuration

This section describes a conceptual information architecture for combined support of both product and process configuration, including a proof-of-feasibility implementation in a configurator.
5.1 Basic design requirements

The uncertainty of both demand and supply of the case company is high, as Section 3 demonstrates. Demand requirements (about product features, quality and service levels) are diverse and difficult to predict. Also, predictability of the amount and time is low, although basic seasonal patterns can be determined. Moreover, the lead-times, yields and qualities of production very much depend on uncontrollable factors.

In order to make this variability manageable, the solution to be designed must support coordination of the high interdependence between the company's products and processes during:

definition: it must be possible to define integrated reference models, which cover the variety of the firm's products and enabling processes and which take into account the constraints arising from its specific process characteristics;

configuration: it must be possible to configure customised products and the accompanying processes, in interaction with the customer and taking into account whether the required input products and capacity are available to promise;

execution: it must be possible to implement the configured business processes in the company's backend systems, to monitor their progress and update product and process configurations if necessary.

More specifically, these basic requirements imply that the design must support coordination of the dependences as developed in the previous section. Table 1
Trang 37identifies these dependencies for the case company by
matching the investigation results with the defined
typology The remainder of this section develops a
corresponding information architecture, including a
proof of feasibility implementation in the configurator
Sofon
5.2 Information architecture for product configuration
Sofon Guided Selling is a model-based product configurator (Sofon B.V., Son, The Netherlands). It provides functionality for the definition of questionnaires that guide users interactively through requirement specification and translates this information into product configurations in the form of bills-of-materials, quotation calculations, visualisations and document generation. Most users utilise Sofon as a front-office system, in combination with an ERP system for the back-office. Figure 2 illustrates the underlying information architecture.
The focus is on coordination of product assembling dependencies. Therefore, functionality is provided to specify the product range, possible features and rules that define permitted selections in reference product models. Additionally, other order specifications such as delivery dates can be defined here. Product experts can enter configuration rules into the configurator's repository. Product data (bill-of-materials, part numbers, prices) and process data (routing, lead times, production cost) can be copied from ERP master data, to ensure that production orders will be in terms that can be interpreted by ERP systems (process and capacity rules precedence).
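Rules that define permitted selections can be sketched as predicates over the current selection; a configuration is permitted only if no rule is violated. The rule contents below are invented for illustration and are not taken from the case company's model.

```python
# Each rule pairs a human-readable name with a predicate that must hold
# for the selection to be permitted. Rule contents are hypothetical.
rules = [
    ("unrooted cuttings ship without trays",
     lambda s: not (s.get("rooting") == "unrooted" and s.get("tray"))),
    ("budded seeds require a tray type",
     lambda s: not (s.get("form") == "budded seeds" and not s.get("tray"))),
]

def violations(selection: dict) -> list[str]:
    """Return the names of all rules the selection violates (empty = permitted)."""
    return [name for name, ok in rules if not ok(selection)]

# A selection of budded seeds without a tray violates the second rule.
result = violations({"form": "budded seeds", "tray": None})
```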
Questionnaires are then generated that guide configuration, either directly by the customer or through a sales representative.

[Table 1, continued (fragments): (2) Process rules precedence: production lead times of about 14 weeks for seedlings, 5-6 weeks for rooted cuttings and 10 days for unrooted cuttings; transport by road, rail or air freight, with different requirements for shipping documentation; capacity to be reserved for the cuttings order fulfilment. (8) Capacity rules precedence (cuttings). (9) Product consumption (cuttings): limits on the number of plants that can be budded synchronously, so a capacity shortage for an urgent order might result in rescheduling another order.]

The configured product and other customer specifications (orders, bill of material) are generated in a format that can be executed by ERP systems (order precedence). Also, basic order-specific routings can be generated, which serve as a basis for planning and scheduling (process precedence).
For the case firm, the reference product model includes product categories (including begonia and cyclamen), specific varieties and product features, such as budded seeds or grown up, cutting size, rooted or unrooted, possible tray types, delivery conditions and royalty types. Figure 3 presents a simplified example in Sofon.
The figure shows that the generic model is defined in two ways. The main part is the definition of wizard-like questionnaires in the language of customers. The generic questionnaire is defined to the left of the screen and possible answers are shown at the top-right of the screen. At the bottom, the product model is specified as a generic bill-of-materials that is executable by ERP systems. During configuration, selections made in the questions are entered automatically into this bill-of-materials. For example, based on the selection of the colour red, the variety 'begonia elatior baladin' is defined (see Figure 3: article code BE72).
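The resolution of questionnaire answers into bill-of-materials lines can be sketched as a lookup. The mapping from the colour red to 'begonia elatior baladin' (article code BE72) follows the Figure 3 example in the text; all other codes and quantities below are made up.

```python
# Hypothetical answer-to-article mapping; only the BE72 entry is from the
# paper's Figure 3 example, the rest is illustrative.
VARIETY_BY_COLOUR = {
    "red": ("BE72", "begonia elatior baladin"),
    "white": ("BE99", "illustrative white variety"),
}

def to_bom(answers: dict) -> list[dict]:
    """Translate questionnaire answers into generic bill-of-materials lines."""
    code, variety = VARIETY_BY_COLOUR[answers["colour"]]
    return [{"article": code, "description": variety,
             "quantity": answers["quantity"]}]

bom = to_bom({"colour": "red", "quantity": 5000})
```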
5.3 Information architecture for combined product and process configuration
Currently, configurators such as Sofon focus on product configuration in the responsive segment. Agile supply chains require combined product and process configuration. Two essential differences can be distinguished: (i) introduction of process configuration between product configurators and planning and scheduling systems; (ii) dynamic alignment of resulting interdependencies. In the case study, Sofon was used to develop an information architecture for this and to evaluate the feasibility of configurators. Figure 4 shows the resulting conceptual model.
Analogous to product configuration, the focus of process configuration is on the coordination of process assembling dependencies, i.e. to assemble specific order fulfilment processes from multiple activities (process modules). Therefore, standard process models can be specified and the composition of customer-specific processes can be guided by configurator tools. However, the important difference with product configuration is that most information required for process configuration is already available in the system. Two important information sources can be distinguished for process configuration: customer orders (output of product configuration) and availability of required input products and capacity (output of the ERP back-office system). Neither of these types of information needs to be specified manually during process configuration.
Although Sofon focuses on companies in the responsive segment that do not face high supply uncertainties, the tool can be applied to configure processes in the same way that it is used to configure products. Figure 5 presents an example for the case company using Sofon's existing functionality.
It shows that there are three additional questions for the configuration of the young plant order fulfilment processes, all of which are answered automatically. The questions concerning capacity and product availability are queries to ERP back-office systems. The question 'how far order-driven?' is answered by an automatic calculation using the retrieved data about product and capacity availability and information about the required vs. possible lead-time. The required delivery lead-time is as specified during product configuration. The possible lead-time is the sum of all order-driven fulfilment processes. The calculation result is input for activity specification in the generic routing (i.e. bill of activities) that is executable by ERP systems (see bottom-right of Figure 5).
Consider, for example, an illustrative order for cyclamen. The customer specification, resulting from product configuration, shows that this is an order for 2000 budded 'cyclamen miniwella twinkle blanc' to be delivered within 4 days in 66-66-44 trays to Hamburg, Germany. Operational ERP data show that these are in stock and that distribution will take 2 days. Thus, only distribution activities are on customer order and these activities are selected for this order (see Figure 5: D2.5 and further).
Consider another simplified example: an order for begonia cuttings. The configured order shows that this is an order for 5000 rooted 'begonia elatior baladin' to be delivered in 7 weeks in 72-72-44 trays to Latina, Italy. Operational ERP data show that the required cuttings are available at parent plants in Brazil and that the required air freight and greenhouse capacity are also available. The lead-time of rooting cuttings from these parent plants is 5 weeks. The total lead-time from harvesting until delivery at the customer site is 6 weeks and 3 days. This is less than 7 weeks, so all activities from harvesting onwards can be on customer order. Consequently, all activities for production of cuttings and for distribution are selected in the generic routing (see bottom-right of Figure 5).
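The 'how far order-driven?' calculation behind both examples can be sketched as a walk backwards along the fulfilment chain, keeping activities on customer order while the cumulative lead-time still fits the required delivery lead-time. The lead-times follow the two orders above, except that the split of the begonia chain's remaining 1 week and 3 days between harvesting and distribution is an assumption made here for illustration.

```python
def order_driven_activities(chain, required_days):
    """chain: list of (activity, lead_time_days) from first production step
    to delivery. Returns the suffix of activities done on customer order."""
    selected, total = [], 0
    for name, days in reversed(chain):
        if total + days > required_days:
            break  # this activity no longer fits: it must run to forecast/stock
        total += days
        selected.insert(0, name)
    return selected

# Cyclamen: in stock, delivery required within 4 days, distribution takes
# 2 days, so only distribution is on customer order.
cyclamen = order_driven_activities(
    [("production", 70), ("distribution", 2)], required_days=4)

# Begonia: rooting takes 5 weeks and harvesting-to-delivery totals 6 weeks
# and 3 days (45 days), within the required 7 weeks (49 days), so everything
# from harvesting onwards is on customer order. The 2/35/8-day split of the
# chain is assumed.
begonia = order_driven_activities(
    [("harvesting", 2), ("rooting", 35), ("distribution", 8)],
    required_days=49)
```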
5.4 Configurator development strategies
The previous analysis shows that product configurators can also be used to support process configuration. Below, it is evaluated more precisely to what extent the identified basic requirements can be met by existing configurators, by discussing how coordination of the defined interdependencies is supported (see Figure 1):
(1) Product assembling is well supported, since this is the traditional focus of configurators. For example, Sofon provides rich functionality for defining generic product models in wizard-like questionnaires and accompanying rules, and generic bills-of-materials.
(2) Process rules precedence requires solid integrations with back-office systems and mechanisms to prevent redundant process logic or to ensure consistency. For example, Sofon provides functionality to copy master data from ERP packages, but alignment has to be done manually by product experts; consistency checks are not supported.
(3) Order precedence: configurators and ERP systems must be technically integrated and order-related data must be defined in a format that is executable by back-office systems. Especially in agile supply chains, functionality is required for reconfiguration of order-related data if changes in the back-office occur. For example, Sofon contains rich functionality for defining standard orders and accompanying bills-of-materials, and it provides standard application connectors for ERP packages. However, reconfiguration of adjusted requirements after contract conclusion is not supported.
(4) Process assembling: this could be supported by applying available product configuration functionality to processes. However, adequate process configuration requires rich functionality to specify reference process models and to configure business process models based on configured orders and operational back-office data. In existing questionnaire-based product configurators, this functionality might be rather basic. For example, in Sofon, generic routings for customer-specific processes can be configured, but this functionality just lists activities, possibly including fixed lead-times. It does not specify possible interactions and sequences among activities, and it is not possible to derive activity lead-times from operational data.
(5) Process precedence: configured processes should be executable in back-office systems to orchestrate order-specific fulfilment, including product reconfigurations. For compatibility with different software environments, process models should be configured in XML-standard notations (i.e. BPMN and BPEL) that are executable in any SOA-compliant back-office system or integration platform. For example, in Sofon, order-specific routings can be configured and executed by planning and scheduling systems. However, as argued under the previous dependency, this is rather basic decomposition information. Sofon does not yet support the configuration of BPMN and BPEL process models.
(6) Product precedence: in product configurators, product availability data can be incorporated into configuration, e.g. to determine the availability to promise. This functionality could also be used for process configuration. For example, in Sofon it is possible to define questions that retrieve data automatically from back-office systems.
(7) Capacity precedence: particularly in the case of order-driven production, capacity availability data are also needed. These can be retrieved in the same way as product availability data.
(8) Capacity rules precedence: in this case, the required functionality is similar to process rules precedence: solid integration with back-office systems and mechanisms to prevent redundant capacity logic or to ensure consistency.
(9) Product consumption, (10) Capacity usage and (11) Capacity assembling: these dependencies are supported by back-office systems for planning and scheduling and are beyond the scope of configurators.
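To make item (5) concrete, a configured routing could be serialised as a minimal BPMN 2.0 process definition. This is a sketch of what such an export might look like, not a feature Sofon offers; the element set used here (process, startEvent, task, sequenceFlow, endEvent) is a small subset of the BPMN 2.0 XML schema, and the routing is the begonia example's activities.

```python
import xml.etree.ElementTree as ET

# BPMN 2.0 model namespace, per the OMG specification.
BPMN = "http://www.omg.org/spec/BPMN/20100524/MODEL"

def routing_to_bpmn(order_id: str, activities: list[str]) -> str:
    """Serialise an ordered activity list as a linear BPMN process."""
    ET.register_namespace("bpmn", BPMN)
    defs = ET.Element(f"{{{BPMN}}}definitions")
    proc = ET.SubElement(defs, f"{{{BPMN}}}process", id=f"order_{order_id}")
    prev = ET.SubElement(proc, f"{{{BPMN}}}startEvent", id="start")
    for i, name in enumerate(activities):
        task = ET.SubElement(proc, f"{{{BPMN}}}task", id=f"task_{i}", name=name)
        # Connect the previous node to this task.
        ET.SubElement(proc, f"{{{BPMN}}}sequenceFlow", id=f"flow_{i}",
                      sourceRef=prev.get("id"), targetRef=task.get("id"))
        prev = task
    ET.SubElement(proc, f"{{{BPMN}}}endEvent", id="end")
    ET.SubElement(proc, f"{{{BPMN}}}sequenceFlow", id="flow_end",
                  sourceRef=prev.get("id"), targetRef="end")
    return ET.tostring(defs, encoding="unicode")

xml_doc = routing_to_bpmn("B5000", ["harvesting", "rooting", "distribution"])
```

A SOA-compliant engine would need further detail (gateways, data associations, lane assignments), which is exactly the information a questionnaire-based routing, as discussed under dependency (4), does not capture.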