Free-Decay Identification Procedure


This identification procedure uses the swing-free mounting mechanism described above. The vehicle is attached to the end of the mechanism, submerged in the water, and the free-decay motion is performed. The structure of the experiment is depicted in Fig. 4. The experiment is described by the differential equation

$$\big((m_{ad}+m_b)l^2 + J + J_s\big)\ddot{\varphi} + \big(l\beta_{tr} + \beta_{rot} + \beta_s\big)\dot{\varphi} + \gamma_{rot}\,\dot{\varphi}\,|\dot{\varphi}| = \big((m_w - m_b)l - m_s r\big)\, g \sin\varphi \tag{24}$$

where the parameters agree with (1) and (19).

Table 1 Estimated parameters for translational (m_ad, β_tr) and rotational (J, β_rot, γ_rot) motion

| Axis | m_ad (kg) | β_tr (kg m/s) | J (kg m²) | β_rot (kg m²/s) | γ_rot (kg m²/rad) |
|------|-----------|---------------|-----------|-----------------|-------------------|
| e1   | 70        | 46            | 1.3       | 2.2             | 4.64              |
| e2   | 80        | 47            | 2         | 2.2             | 4.75              |
| e3   | 163       | 66            | 6.7       | 3               | 5.5               |

116 M. Langmajer and L. Bláha

For identification purposes, and due to the low angular velocity (see Sect. 3.4), Eq. (24) is approximated as

$$\big((m_{ad}+m_b)l^2 + J + J_s\big)\ddot{\varphi} + \big(l\beta_{tr} + \beta_{rot} + \beta_s\big)\dot{\varphi} = \big((m_w - m_b)l - m_s r\big)\, g \sin\varphi \tag{25}$$

Due to the structure of formula (25), no more than two base inertial parameters can be identified. The number of remaining estimated parameters, however, is exactly two, namely $m_{ad}$ and $\beta_{tr}$. The base inertial parameters to estimate are

$$\hat{\theta}_1 = (m_{ad}+m_b)l^2 + J + J_s, \qquad \hat{\theta}_2 = l\beta_{tr} + \beta_{rot} + \beta_s. \tag{26}$$

The observation matrix has the form

$$A = \begin{bmatrix} \ddot{\varphi}_1 & \dot{\varphi}_1 \\ \ddot{\varphi}_2 & \dot{\varphi}_2 \\ \vdots & \vdots \\ \ddot{\varphi}_n & \dot{\varphi}_n \end{bmatrix}, \tag{27}$$

Fig. 4 Free-decay structure

Structural Parameter Identification of a Small Robotic Underwater Vehicle 117

and the corresponding parameters estimated by the least-squares method are expressed from the base inertial parameters (26). For axis e1 we get

$$m_{ad} = 70\ \mathrm{kg}, \qquad \beta_{tr} = 46\ \mathrm{kg\,m/s}. \tag{28}$$

The other estimated parameters are arranged in Table 1.
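The least-squares step in (26)–(28) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function and signal names are our own, the auxiliary constants are taken from Table 2, and `np.gradient` merely stands in for the noise-robust differentiators the paper discusses.

```python
import numpy as np

# Auxiliary parameters from Table 2 (assumed fixed and known)
m_s, J_s, beta_s = 7.88, 3.13, 0.31   # mounting mechanism: mass, inertia, damping
r, l = 0.5, 1.06                       # COM offset and pendulum length (m)
m_b, m_w = 40.9, 40.9                  # vehicle body mass, displaced-water mass (kg)
g = 9.81

def estimate_base_parameters(phi, dt, J, beta_rot):
    """Least-squares estimate of m_ad and beta_tr from a free-decay record
    phi(t) sampled with step dt, following (25)-(27); J and beta_rot are
    assumed known from the rotational experiment (Table 1)."""
    dphi = np.gradient(phi, dt)              # numerical differentiation of the record
    ddphi = np.gradient(dphi, dt)            # noise-sensitive; smoothing advisable
    A = np.column_stack([ddphi, dphi])       # observation matrix (27)
    b = ((m_w - m_b) * l - m_s * r) * g * np.sin(phi)   # right-hand side of (25)
    theta, *_ = np.linalg.lstsq(A, b, rcond=None)       # base parameters (26)
    m_ad = (theta[0] - J - J_s) / l**2 - m_b            # unpack theta_1
    beta_tr = (theta[1] - beta_rot - beta_s) / l        # unpack theta_2
    return m_ad, beta_tr
```

On a clean synthetic decay generated from (25) with the Table 1 values for axis e1, this recovers estimates close to m_ad = 70 kg and β_tr = 46 kg m/s; on real data the quality of the numerical derivatives dominates the estimation error.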

5 Results of Identification

This section summarizes the results of the parameter identification. All the estimated parameters were checked using a simulation model in Matlab/SimMechanics, whose response is compared to measured data. The comparison of measured and simulated responses is depicted in Figs. 5 and 7. Figure 6 depicts the record of the thruster input and the forced oscillation of the vehicle in each axis. The

Fig. 5 Response of free-decay structure and corresponding model with estimated parameters


estimated parameters are arranged in Table 1. Table 2 adds a summary of the remaining auxiliary parameters.

6 Conclusion

The paper describes an identification procedure for estimating the base inertial parameters of a robotic underwater vehicle. The mathematical model was defined using Kirchhoff's equations, with external forces representing the hydrodynamic drag in the standard simplification. The meaning of the base inertial parameters is explained. Three identification methods are described, together with the problem of numerical differentiation. The identification procedure uses two experiments to identify the rotational and translational parameters of the given model. The experiments are designed with respect to the unknown parameters of the model and according to local conditions. The estimated parameters are validated using simulations, where the simulated responses are compared with real measured data. The parametric model was successfully used to design a stable motion controller for a real prototype, which shows that the model is rich enough to be relevant yet simple enough to be useful for control design.

Fig. 6 Sinusoidal input and vehicle response for each axis during the oscillatory forced-motion experiment

Acknowledgments This work was supported by grant TA02020414 of the Technology Agency of the Czech Republic and by project LO1506 of the Czech Ministry of Education, Youth and Sports.

Fig. 7 Vehicle response for each axis (Fig. 6) and the corresponding outputs of the model with estimated parameters

Table 2 Auxiliary measured parameters (mounting mechanism and vehicle body/thrusters)

| m_s (kg) | J_s (kg m²) | β_s (kg m²/s) | r (m) | l (m) | m_b (kg) | m_w (kg) | C_tf (hor/vert) | C_tb (hor/vert) |
|----------|-------------|---------------|-------|-------|----------|----------|-----------------|-----------------|
| 7.88     | 3.13        | 0.31          | 0.5   | 1.06  | 40.9     | 40.9     | 3.28/6.12       | 2.22/1.28       |




Using Online Modelled Spatial Constraints for Pose Estimation in an Industrial Setting

Kenneth Korsgaard Meyer, Adam Wolniakowski, Frederik Hagelskjær, Lilita Kiforenko, Anders Glent Buch, Norbert Krüger, Jimmy Jørgensen and Leon Bodenhagen

Abstract We introduce a vision system that is able to learn spatial constraints on-line to improve pose estimation in terms of both correct recognition and computational speed. Using a simulated industrial robot system performing various pick-and-place tasks, we show the effect of model building when making use of visual knowledge, in terms of visually extracted pose hypotheses, as well as action knowledge, in terms of pose hypotheses verified by action execution.

We show that the use of action knowledge significantly improves the pose estimation process.

Keywords Pose estimation · Online modelling · Pick and place · Stable pose

1 Introduction

Reliable pose estimation, i.e., finding the correct 6-DOF pose of an object relative to the camera, is still a challenge, in particular in unconstrained environments. The detection method for this will be elaborated in Sect. 3.3. Pose estimation often requires adaptation to different lighting conditions as well as parameter tuning and camera positions. This has multiple reasons: Firstly, even with the most advanced recording systems it is often difficult to generate reliable point clouds in an industrial context, where objects are shiny and illumination can only be controlled to a certain degree. Secondly, even if reliable point clouds can be computed, the information provided by one view of a 3D sensor might be insufficient to reliably

K.K. Meyer · A. Wolniakowski · F. Hagelskjær (✉) · L. Kiforenko · A.G. Buch · N. Krüger · J. Jørgensen · L. Bodenhagen
Maersk Mc-Kinney Moller Institute, University of Southern Denmark, Odense, Denmark
e-mail: frhag@mmmi.sdu.dk

A.G. Buch
e-mail: anbu@mmmi.sdu.dk

N. Krüger
e-mail: norbert@mmmi.sdu.dk

© Springer International Publishing Switzerland 2017
D. Zhang and B. Wei (eds.), Mechatronics and Robotics Engineering for Advanced and Intelligent Manufacturing, Lecture Notes in Mechanical Engineering, DOI 10.1007/978-3-319-33581-0_10


compute poses due to the lack of shape structure. Humans, however, perform pose estimation with high reliability even in difficult contexts based on 2D views. One reason is that they can make use of context knowledge: humans are able to improve their performance on tasks by learning appropriate context models after very few trials.

In this work, we exploit spatial constraints to improve pose estimation performance. These cover knowledge of the poses that can be expected, so-called "stable poses", as well as knowledge about the spatial arrangement of items on a conveyor belt or a table. Figure 1 shows a typical situation in which some correct and some wrong pose hypotheses are generated based on the available visual information. By applying knowledge about which poses are actually physically possible and how objects are usually arranged, the wrong hypotheses can be eliminated. In this paper, we show how a model that represents such constraints can be learned on-line in a pose estimation system that is integrated in a complex assembly process.

Once enough data is generated to initialize a suitable model, this model can

"kick in" and reduce the search problem by disregarding all pose hypotheses that are found unlikely to be correct according to the model, leading to better performance and faster processing.

We realize two different ways of building models for such constraints: First, we exploit visual knowledge only, i.e., we build our model purely from visually extracted pose hypotheses. The problem here, however, is that many wrong visual hypotheses can lead to wrong model predictions, which usually results in a drop in performance.

As an alternative, we realize modelling based on the already performed actions.

Since we can evaluate the success of grasps, we can constrain the model building to only those pose hypotheses that led to successful actions. We show that with this modelling, we are able to more consistently reduce computational time and increase performance.
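The action-based model building described above can be sketched as follows. This is a hedged illustration, not the authors' system: the class name, the choice of the object z-axis as the stable-pose signature, and the threshold and sample-count defaults are our assumptions.

```python
import numpy as np

class ActionVerifiedPoseFilter:
    """Online model of stable orientations, built only from pose hypotheses
    that led to a successful grasp (action knowledge)."""

    def __init__(self, angle_threshold_deg=10.0, min_samples=5):
        self.cos_thresh = np.cos(np.radians(angle_threshold_deg))
        self.min_samples = min_samples
        self.up_axes = []                 # object z-axes of action-verified poses

    def add_verified(self, R):
        """Record the orientation (3x3 rotation matrix) of a pose whose grasp succeeded."""
        self.up_axes.append(R[:, 2])      # third column: object z-axis in world frame

    def accept(self, R):
        """Accept a new pose hypothesis if its z-axis is close to any verified one."""
        if len(self.up_axes) < self.min_samples:
            return True                   # model not initialized yet: pass everything
        z = R[:, 2]
        return any(float(np.dot(z, u)) >= self.cos_thresh for u in self.up_axes)
```

Until enough verified samples are collected, the filter passes everything; once initialized, it "kicks in" and discards hypotheses whose orientations have never led to a successful action.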

The paper is structured as follows: in Sect. 2, an overview of the method compared to the state of the art is given. Section 3 elaborates the methods used in the article

Fig. 1 Result of the pose estimation algorithm. A large number of both correct and incorrect poses are found. a Scene image with three rotor caps on the conveyor belt. b Found poses in the point cloud; the lighter the color of the rotor cap, the lower the penalty of the estimate

124 K.K. Meyer et al.

along with a description of the simulation environment. Section 4 describes the online modelling used in the article. The results are given in Sect. 5. Lastly, Sect. 6 contains a conclusion based on the test results of the different methods.

2 State of the Art

The use of spatial constraints during 3D pose estimation is relatively rare. Most state-of-the-art methods use more or less complicated hypothesis verification modules for eliminating bad poses. Some noteworthy examples hereof are the recognition systems based on Spin Images (Johnson and Herbert 1999), Tensor Matching (Mian et al. 2006) and, more recently, the RoPS features (Guo et al. 2013).

Common to all these systems, and many other 3D recognition pipelines, is that they rely on individual verification of pose hypotheses, which are generated from e.g. Hough voting or RANSAC (Fischler and Bolles 1981) pipelines.

Instead, some works have used contextual information during pose hypothesis verification to arrive at a better scene interpretation. In Papazov and Burschka (2010), a RANSAC scheme is used for gathering pose hypotheses, based on matches from point pair features, which were also successfully used in Drost et al. (2010).

Then, a conflict graph is constructed using the set of pose candidates, and conflicting hypotheses are removed by non-maximum suppression. In Aldoma et al. (2012), a global hypothesis verification algorithm was used. In this work, a set of hypothesis poses is assigned a global cost based on a variety of cues, e.g. occlusions and overlaps. A random subset of hypotheses is then evaluated using this cost function, and a simulated annealing process is used to minimize the global cost function, selecting the set of hypotheses that best describes the scene in a global manner.

This work builds upon the prior work of Jørgensen et al. (2015), which deals with stable poses of an object. We use such stable poses in our constraint framework in an active manner. Additionally, we compare the use of both vision and action information, and we use an online modelling scheme instead of requiring a sophisticated and error-prone prior modelling scheme.

3 Methods

In this section we describe the types of constraints that we use for the point cloud segmentation, and the methods that were used to construct their models. We first introduce the concept of a stable pose in Sect. 3.1, which allows us to classify and filter the detected samples by their orientation. Then, we arrive at spatial constraints constructed from the filtered samples: the point, line, and plane constraints in Sect. 3.2. All of these spatial constraint models, as well as the stable pose model, are learned online using the RANSAC algorithm. The threshold values ε on the fit-error function E, specific to each of the models, are used to determine when a detection is part of a model. In Sect. 3.3, we then briefly describe the simulation framework used in our experiments.
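A minimal sketch of how such a RANSAC-learned constraint might look, for the plane case: candidate planes are hypothesized from random detection triples, and the threshold ε on the point-to-plane fit error E decides which detections belong to the model. The function name, the sampling scheme, and the parameter defaults are our assumptions, not the authors' implementation.

```python
import numpy as np

def ransac_plane(points, eps=0.005, iters=200, rng=None):
    """Fit a plane constraint to detection positions (n-by-3 array) with RANSAC.
    Returns ((normal, d), inlier_mask) with the plane n.x + d = 0 and
    inliers defined by the fit error E = |n.x + d| <= eps."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_model, best_inliers = None, None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-12:
            continue                          # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        d = -np.dot(n, p0)
        E = np.abs(points @ n + d)            # fit-error function E per detection
        inliers = E <= eps                    # threshold epsilon on the fit error
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_model, best_inliers = (n, d), inliers
    return best_model, best_inliers
```

The point and line constraints follow the same pattern with a different minimal sample (one point, two points) and a correspondingly different fit-error function E.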
