Kalman State Estimation
Fig 11 Comparison of the actual, predicted and updated positions of the underwater pipeline using the Kalman tracking algorithm
9 References
Whitcomb, L.L (2000) Underwater robotics: out of the research laboratory and into the
field, IEEE International Conference on Robotics and Automation, ICRA '00, Vol.1, 24-28
April 2000, pp 709 – 716
Asakawa, K., Kojima, J., Kato, Y., Matsumoto, S and Kato, N (2000) Autonomous
underwater vehicle AQUA EXPLORER 2 for inspection of underwater cables,
Proceedings of the 2000 International Symposium on Underwater Technology, 2000, UT
00 23-26 May 2000, pp 242 – 247
Ortiz, A., Simo, M., Oliver, G (2002) A vision system for an underwater cable tracker,
Machine vision and application 2002, Vol.13 (3), July 2002, pp 129-140
Griffiths, G and Birch, K (2000) Oceanographic surveys with a 50 hour endurance
autonomous underwater vehicle, Proceeding of the Offshore Technology Conference, May 2000, Houston, TX
Asif, M and Arshad, M.R (2006) Visual tracking system for underwater pipeline inspection
and maintenance application, First International Conference on Underwater System
Technology, USYS06 18 – 20 July 2006, pp 70-75
Cowls, S and Jordan, S (2002) The enhancement and verification of a pulse induction based
buried pipe and cable survey system Oceans '02 MTS/IEEE Vol 1, 29-31 Oct 2002,
pp 508 – 511
Petillot, Y.R., Reed, S.R and Bell, J.M (2002) Real time AUV pipeline detection and tracking
using side scan sonar and multi-beam echo-sounder, Oceans '02 MTS/IEEE Vol 1,
29-31 Oct 2002, pp 217 - 222
Evans, J., Petillot, Y., Redmond, P., Wilson, M and Lane, D (2003) AUTOTRACKER: AUV
embedded control architecture for autonomous pipeline and cable tracking,
OCEANS 2003, Proceedings, Vol 5, 22-26 Sept 2003, pp 2651 – 2658
Balasuriya, A & Ura, T (1999) Multi-sensor fusion for autonomous underwater cable
tracking, Riding the Crest into the 21st Century OCEANS '99 MTS/IEEE, Vol 1, 13-16
Sept 1999, pp 209 – 215
Foresti, G.L (2001) Visual inspection of sea bottom structures by an autonomous
underwater vehicle, IEEE Transactions on Systems, Man and Cybernetics, Part B, Vol
31 (5), Oct 2001, pp 691 – 705
Matsumoto, S & Ito, Y (1995) Real-time vision-based tracking of submarine-cables for
AUV/ROV, MTS/IEEE Conference Proceedings of OCEANS '95, 'Challenges of Our
Changing Global Environment', Vol 3, 9-12 Oct 1995, pp 1997 – 2002
Balasuriya, B.A.A.P., Takai, M., Lam, W.C., Ura, T & Kuroda, Y (1997) Vision based
autonomous underwater vehicle navigation: underwater cable tracking MTS/IEEE
Conference Proceedings of OCEANS '97, Vol 2, 6-9 Oct 1997, pp 1418 – 1424
Zanoli, S.M & Zingaretti, P (1998) Underwater imaging system to support ROV guidance,
IEEE Conference Proceedings of OCEANS '98, Vol 1, 28 Sept.-1 Oct 1998, pp 56 – 60
Perona, P & Malik, J (1990) Scale-space and edge detection using anisotropic diffusion,
IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol 12(7), July 1990,
pp 629 – 639
Weickert, J (2001) Applications of nonlinear diffusion in image processing and computer
vision, Proceedings of Algoritmy 2000, Acta Math University Comenianae Vol LXX,
2001, pp 33 – 50
Blake, A & Isard, M (1998) Active Contours, Springer, Berlin, 1998.
Cootes, T., Cooper, D., Taylor, C & Graham, J (1995) Active shape models – their training and
application, Computer Vision and Image Understanding, Vol 61(1), 1995, pp 38 – 59
MacCormick, J (2000) Probabilistic modelling and stochastic algorithms for visual
localization and tracking Ph.D thesis, Department of Engineering Science,
University of Oxford 2000
Robotics Vision-based Heuristic Reasoning for Underwater Target Tracking and Navigation
Kia Chua and Mohd Rizal Arshad
USM Robotics Research Group, Universiti Sains Malaysia
Malaysia
1 Introduction
Most underwater pipeline tracking operations are performed by remotely operated vehicles (ROVs) driven by human operators. These tasks often require the continued attention and the knowledge and experience of human operators to maneuver the robot (Foresti G.L and Gentili, 2000). In these operations, human operators do not require exact measurements from the visual feedback; they act based on reasoning.
For these reasons, it is desirable to develop a robotic vision system with the ability to mimic the human mind (a human expert's judgement of terrain traversability) as a translation of the human solution. In this way, human operators can be reasonably confident that decisions made by the navigation system are reliable enough to ensure safety and mission completion. To achieve such confidence, the system can be trained by an expert (Howard A et al, 2001).
In order to enable robots to make autonomous decisions that guide them through the most traversable regions of the terrain, fuzzy logic techniques can be developed for classifying traversability using computer vision-based reasoning. Computing with words is recommended either when the available information is too imprecise to be expressed in numbers, or when there is a tolerance for imprecision which can be exploited to achieve tractability and a suitable interface with the real world (Zadeh L, 1999).
Current position-based navigation techniques cannot be used in object tracking because measuring the position of the object of interest is impossible due to its unknown behavior (Yang Fan and Balasuriya, A, 2000). The current methods for realizing target tracking and navigation of an AUV use optical, acoustic and laser sensors. These methods have some problems, mainly in terms of complicated processing requirements and hardware space limitations on AUVs (Yang Fan and Balasuriya, A, 2000). Other relevant research includes the neural-network terrain-based classifiers of Foresti et al (2000) and Foresti, G L and Gentili (2002). Existing methods using the Hough transform and Kalman filtering for image enhancement have also been very popular (Tascini, G et al, 1996), (Crovatot, D et al, 2000), (El-Hawary, F and Yuyang, Jing, 1993), (Fairweather, A J R et al, 1997) and (El-Hawary, F and Yuyang, Jing, 1995).
2 Research Approach
Visible features of underwater structures enable humans to distinguish an underwater pipeline from the seabed, and to see individual parts of the pipeline. A machine vision and image processing system capable of extracting and classifying these features is used to initiate target tracking and navigation of an AUV.
The aim of this research is to develop a novel robotics vision system at the conceptual level, in order to assist an AUV's interpretation of underwater oceanic scenes for the purpose of object tracking and intelligent navigation. The captured underwater images contain the object of interest (a pipeline), the simulated seabed, water and other unwanted noise. Image processing techniques, i.e. morphological filtering, noise removal, edge detection, etc., are performed on the images in order to extract subjective uncertainties of the object of interest. These subjective uncertainties become the multiple inputs of a fuzzy inference system. The fuzzy rules and membership functions are determined in this project. The fuzzy output is a crisp value of the direction for navigation or the decision on the control action.
2.1 Image processing operations
For this vision system, image analysis is conducted to extract high-level information for computer analysis and manipulation. This high-level information is actually the morphological parameters forming the input of a fuzzy inference system (a linguistic representation of terrain features).
When an RGB image is loaded (a typical input image is shown in Fig 1), it is converted into a gray-scale image. Gray-level thresholding is then performed to extract the region of interest (ROI) from the background. The intensity levels of the object of interest are identified. The binary image B[i,j] is obtained using the object of interest's intensity values in the range [T1, T2] for the original gray image F[i,j]. That is,
$B[i,j] = \begin{cases} 1, & T_1 \le F[i,j] \le T_2 \\ 0, & \text{otherwise} \end{cases} \qquad (1)$
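A minimal sketch of this thresholding step in NumPy (the threshold values T1 and T2 below are illustrative, not taken from the chapter):

```python
import numpy as np

def threshold_roi(gray, t1, t2):
    """Binary image of Eq. (1): B[i,j] = 1 where t1 <= F[i,j] <= t2, else 0."""
    return ((gray >= t1) & (gray <= t2)).astype(np.uint8)

# Example with a synthetic 8-bit gray image; the threshold values are
# illustrative only, not the ones used in the chapter.
gray = (np.random.rand(240, 320) * 255).astype(np.uint8)
binary = threshold_roi(gray, t1=90, t2=160)
```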
At this stage, feature extraction is considered complete. The object of interest is a pipeline laid along the perspective view of the camera. The image is segmented into five segments, which are processed separately for terrain features as multiple steps of inputs for the fuzzy controller. In order to investigate each specific area within an image segment more closely, each segment is further divided into six predefined sub-segments. Each sub-segment (as illustrated by Fig 2) is defined as follows:
• Sub segment 1 = Upper left segment of the image
• Sub segment 2 = Upper right segment of the image
• Sub segment 3 = Lower left segment of the image
• Sub segment 4 = Lower right segment of the image
• Sub segment 5 = Upper segment of the image
• Sub segment 6 = Lower segment of the image
A mask image with constant intensity is then laid over the image as shown in Fig 3. This is actually an image addition process whereby a lighter (highest intensity value) area is produced where the mask intersects the region of interest. The coverage area of the remaining region with the highest intensity value is then calculated, as shown in Fig 4. The area A of the image is determined by
$A = \sum_{i}\sum_{j} B[i,j] \qquad (2)$
For sub-segments 5 and 6, the location of the object of interest relative to the image center is determined. The coverage area and location of the object of interest in each sub-segment are finally accumulated as the multiple inputs of the fuzzy inference system, as sketched below.
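A sketch of how the coverage area of Eq. (2) and the center offsets for sub-segments 5 and 6 could be computed from the binary image; splitting the image into equal halves and bands is an assumption made here for illustration, not the chapter's exact sub-segment geometry:

```python
import numpy as np

def coverage_area(binary):
    """Coverage area of Eq. (2): A = sum_i sum_j B[i,j]."""
    return int(binary.sum())

def quadrant_areas(binary):
    """Coverage area in sub-segments 1-4 (upper left/right, lower left/right).
    Splitting the image into equal halves is an assumption for illustration."""
    h, w = binary.shape
    return {
        1: coverage_area(binary[:h // 2, :w // 2]),  # upper left
        2: coverage_area(binary[:h // 2, w // 2:]),  # upper right
        3: coverage_area(binary[h // 2:, :w // 2]),  # lower left
        4: coverage_area(binary[h // 2:, w // 2:]),  # lower right
    }

def horizontal_offset(binary, rows):
    """Signed pixel offset of the region's centroid column from the image
    center within a row band (used here for sub-segments 5 and 6)."""
    h, w = binary.shape
    cols = np.nonzero(binary[rows, :])[1]
    return float(cols.mean() - w / 2.0) if cols.size else 0.0

# Synthetic binary mask standing in for the thresholded image.
binary = (np.random.rand(240, 320) > 0.7).astype(np.uint8)
areas = quadrant_areas(binary)
end_offset = horizontal_offset(binary, slice(0, binary.shape[0] // 2))    # upper band
begin_offset = horizontal_offset(binary, slice(binary.shape[0] // 2, None))  # lower band
```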
Fig 1 Typical input image (RGB)
Fig 2 Image sub-segments
Fig 3 Mask overlaid on the thresholded, noise-removed image
Fig 4 Acquired area information
2.2 The fuzzy inference system
The fuzzy controller is designed to automate how a human expert, who is successful at this task, would control the system. The multiple inputs to the controller are variables defining the state of the camera with respect to the pipeline, and the single output is the steering command set point. Consider the situation illustrated by Fig 5. Fuzzy logic is used to interpret this heuristic in order to generate the steering command set point; in this case, the set point of the AUV is shifted by a certain amount (ΔX) to the right.
Basically, a human operator does not require crisp or exact visual input for mission completion. There are a total of six inputs based on the image processing algorithm:
• Input variable 1, x1 = pipeline area in the upper left segment of the image
Fuzzy term set T(x1) = {Small, Medium, Large}; universe of discourse U(x1) = [0.1, 1.0]
• Input variable 2, x2 = pipeline area in the upper right segment of the image
Fuzzy term set T(x2) = {Small, Medium, Large}; universe of discourse U(x2) = [0.1, 1.0]
• Input variable 3, x3 = pipeline area in the lower left segment of the image
Fuzzy term set T(x3) = {Small, Medium, Large}; universe of discourse U(x3) = [0.1, 1.0]
• Input variable 4, x4 = pipeline area in the lower right segment of the image
Fuzzy term set T(x4) = {Small, Medium, Large}; universe of discourse U(x4) = [0.1, 1.0]
• Input variable 5, x5 = end point of the pipeline relative to the image center point
Fuzzy term set T(x5) = {Left, Center, Right}; universe of discourse U(x5) = [0.1, 1.0]
• Input variable 6, x6 = beginning point of the pipeline relative to the image center point
Fuzzy term set T(x6) = {Left, Center, Right}; universe of discourse U(x6) = [0.1, 1.0]
The only fuzzy output is:
• Output variable 1, y1 = AUV steering command set point
Fuzzy term set T(y1) = {Turn left, Go straight, Turn right}; universe of discourse V(y1) = [0, 180]
The input vector, x is
x = (x1, x2, x3, x4, x5, x6)^T    (3)
The output vector, y, is
y = (y1)    (4)
Gaussian and Π-shaped membership functions are selected in this case to map the inputs to the output. A Gaussian curve depends on two parameters, σ and c, and is represented by

$f(x; \sigma, c) = \exp\!\left(-\frac{(x - c)^2}{2\sigma^2}\right)$

The Π-shaped curve depends on the three parameters a, b and c. In the above equations, σ, a, b and c are the parameters that are adjusted to fit the desired membership data. Typical input-variable and output-variable membership function plots are shown in Fig 6 and Fig 7.
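A minimal NumPy sketch of the Gaussian membership function; the σ and c values below are assumed for illustration and are not the tuned values of the chapter:

```python
import numpy as np

def gaussmf(x, sigma, c):
    """Gaussian membership function: exp(-(x - c)^2 / (2 * sigma^2))."""
    return np.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

# Illustrative term set for a pipeline-area input on the universe [0.1, 1.0];
# the sigma and center values are assumptions, not the tuned values from the text.
x = np.linspace(0.1, 1.0, 91)
small = gaussmf(x, sigma=0.15, c=0.1)
medium = gaussmf(x, sigma=0.15, c=0.55)
large = gaussmf(x, sigma=0.15, c=1.0)
```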
Fig 5 Illustration of tracking strategy
Fig 6 Typical input variable membership function plot
Fig 7 Typical output variable membership function plot
There are a total of 13 fuzzy control rules. The rule base is shown in Fig 8.
Fig 8 Rule viewer for fuzzy controller
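Since the 13 rules are only shown graphically in Fig 8, the fragment below is an illustrative Mamdani-style evaluation (min for AND, max for aggregation) over two made-up rules and assumed term parameters, not the chapter's actual rule base:

```python
import numpy as np

def gaussmf(x, sigma, c):
    return np.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

# Output universe: steering command set point in degrees, V(y1) = [0, 180].
y = np.linspace(0.0, 180.0, 181)
turn_left = gaussmf(y, 25.0, 0.0)
go_straight = gaussmf(y, 25.0, 90.0)
turn_right = gaussmf(y, 25.0, 180.0)

def evaluate_rules(x5_end_point, x3_area_ll, x4_area_lr):
    """Two illustrative rules only (the real controller has 13):
    R1: IF end point is Right AND lower-right area is Large THEN Turn right.
    R2: IF end point is Left  AND lower-left  area is Large THEN Turn left."""
    right = gaussmf(x5_end_point, 0.15, 1.0)   # 'Right' term of input 5
    left = gaussmf(x5_end_point, 0.15, 0.1)    # 'Left' term of input 5
    large_lr = gaussmf(x4_area_lr, 0.15, 1.0)  # 'Large' term of input 4
    large_ll = gaussmf(x3_area_ll, 0.15, 1.0)  # 'Large' term of input 3
    w1 = min(right, large_lr)                  # fuzzy AND -> min
    w2 = min(left, large_ll)
    # Mamdani implication (clip) and aggregation (max) over the output sets.
    return np.maximum(np.minimum(w1, turn_right), np.minimum(w2, turn_left))
```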
In order to obtain a crisp output, the output fuzzy sets are aggregated and fed into a centroid (center of gravity) defuzzification process. The defuzzifier determines the actual actuating signal y' as follows:
$y' = \dfrac{\sum_i y_i\,\mu_B(y_i)}{\sum_i \mu_B(y_i)} \qquad (8)$
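A direct implementation of Eq. (8), reusing the aggregated output fuzzy set from the rule-evaluation sketch above:

```python
import numpy as np

def centroid_defuzzify(y, mu_b):
    """Center-of-gravity defuzzification of Eq. (8):
    y' = sum_i y_i * mu_B(y_i) / sum_i mu_B(y_i)."""
    denom = mu_b.sum()
    if denom == 0.0:
        return 0.5 * (y[0] + y[-1])  # fall back to the middle of the universe
    return float((y * mu_b).sum() / denom)

# With the rule-evaluation sketch above:
# steering = centroid_defuzzify(y, evaluate_rules(0.9, 0.2, 0.8))
```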
3 Simulation and Experimental Results
The simulation procedures are as follows:
a. Define the envelope curve (working area) of the prototype
b. Give the real position and orientation of the pipeline, defined on a grid of coordinates
c. Predefine the AUV drift tolerance limit (±8.0 cm) away from the actual pipeline location
d. Initiate the algorithm
e. Record the AUV navigation paths and visualize them graphically
The algorithm has been tested in computer and prototype simulations. For comparative purposes, the results before and after fuzzy tuning are presented. Typical examples of results before fuzzy tuning are shown in Fig 9 and Table 1.
Fig 9 AUV path (no proper tuning)
AUV path: actual location x-axis (cm) vs. simulated result x-axis (cm)
Table 1 Data recorded (no proper tuning)
Typical examples of results after fuzzy tuning are shown in Fig 10 and Table 2.
Fig 10 AUV path (with proper tuning)
AUV path: actual location x-axis (cm) vs. simulated result x-axis (cm)
Table 2 Data recorded (with proper tuning)
The simulation results show that drift within the tolerance limit is achievable when proper tuning (training) is applied to the fuzzy system. The percentage of drift (relative to the tolerance limit) is considered acceptable as long as it is less than 100%, since this implies the path is within the boundary. The effectiveness of the system has been further demonstrated with different target orientations and lighting conditions.
4 Conclusions
This paper introduces a new technique for AUV target tracking and navigation. The image processing algorithm developed is capable of extracting the qualitative information about the terrain required by human operators to maneuver an ROV for pipeline tracking. It is interesting to note that the fuzzy control system developed is able to mimic human operators' inherent ability to decide on acceptable control actions. This has been verified experimentally and the result is favourable, i.e. within the 8.0 cm drift tolerance limit in a 1.5 m x 2.0 m working envelope. One of the most interesting aspects is the system's ability to perform target tracking and navigation from the knowledge gained by interpreting images grabbed in a perspective view of the terrain.
It should also be noted that the system offers another human-like method of representing human experience and knowledge of operating an ROV, rather than expressing it in the differential equations of a common PID controller. The system does not require sophisticated image processing algorithms such as Kalman filtering or Hough transform techniques; all required input variables are merely approximate values for mission completion, just like in the human vision system. The simplicity of the system is further underlined by the fact that a priori knowledge of the terrain is not necessary as part of the algorithm, whereas such knowledge is currently required by some of the available pipeline tracking techniques such as (Evans, J, et al, 2003) and (Arjuna Balasuriya and Ura, T, 2002). The processing time is therefore reduced.
In general, the whole computational process for this prototype is complex, and it usually takes about 60 seconds to arrive at the desired output for 5 steps (22.5 cm per step), which is not practical for the commercial standard requirement of at least 4 knots (about 2 m/s) of AUV speed. The commercial standard requirements of a survey AUV can be found in (Bingham, D, 2002). However, the proposed system is a workable concept because of its capability to look forward and perceive the terrain from a perspective view. As illustrated in Fig 11, the conditions perceived from the second captured image could be processed concurrently while the AUV completes the fourth and fifth steps based on the previous image information. This would improve the processing time enough to support high-speed AUV applications.
In addition, further studies on improving the program structure and calculation steps may help to achieve better computation times. Future use of transputers for parallel processing, or of higher-speed processors, can also be expected to bring the system into practical use.
Fig 11 AUV path and its image capturing procedure
5 References
Arjuna Balasuriya & Ura, T (2002) Vision-based underwater cable detection and following
using AUVs, Oceans '02 MTS/IEEE, 29-31 Oct 2002, Vol 3, pp 1582-1587, 2002
Bingham, D , Drake, T , Hill, A , & Lott, R (2002) The Application of Autonomous
Underwater Vehicle (AUV) Technology in the Oil Industry – Vision and Experiences,
FIG XXII International Congress Washington, D.C USA, April 19-26 2002
Crovatot, D , Rost, B , Filippini, M , Zampatot, M & Frezza, R (2000) Segmentation of
underwater Images for AUV navigation, Proceedings of the 2000 IEEE international
conference on control applications, 25-27 September 2000, pp 566-569, 2000
El-Hawary, F & Yuyang, Jing (1993) A robust pre-filtering approach to EKF underwater
target tracking, Proceedings of OCEANS '93 Engineering in Harmony with Ocean,
18-21 Oct 1993, Vol.2, pp 235-240, 1993
El-Hawary, F & Yuyang, Jing (1995) Robust regression-based EKF for tracking underwater
targets, IEEE Journal of Oceanic Engineering, Vol.20, Issue.1, pp.31-41, Jan 1995
Evans, J.; Petillot, Y.; Redmond, P.; Wilson, M & Lane, D (2003) AUTOTRACKER: AUV
embedded control architecture for autonomous pipeline and cable tracking, OCEANS
2003 Proceedings, 22-26 Sept 2003, Vol 5, pp 2651-2658, 2003
Fairweather, A J R , Hodgetts, M A & Greig, A R (1997) Robust scene interpretation of
underwater image sequences, IPA97 Conference, 15-17 July 1997, Conference
Publication No 443, pp 660-664, 1997
Foresti G.L & Gentili S (2000) A vision based system for object detection in underwater
images, International journal of pattern recognition and artificial intelligence, vol 14, no 2,
pp 167-188, 2000
Foresti, G L & Gentili, S (2002) A hierarchical classification system for object recognition in
underwater environments, IEEE Journal of Oceanic Engineering, Vol 27, No 1, pp
66-78, 2002
Howard A , Tunstel E , Edwards D & Carlson Alan (2001) Enhancing fuzzy robot
navigation systems by mimicking human visual perception of natural terrain
traversability, Joint 9th IFSA World Congress and 20th NAFIPS International Conference, Vancouver, B.C., Canada, pp 7-12, July 2001
Tascini, G , Zingaretti, P , Conte, G & Zanoli, S.M (1996) Perception of an underwater
structure for inspection and guidance purpose, Advanced Mobile Robot, 1996.,
Proceedings of the First Euromicro Workshop on , 9-11 Oct 1996, pp 24 – 28, 1996
Yang Fan & Balasuriya, A (2000) Autonomous target tracking by AUVs using dynamic
vision, Proceedings of the 2000 International Symposium on Underwater Technology, pp
187-192, 23-26 May 2000
Zadeh L (1999) From computing with numbers to computing with words –from
manipulation of measurements to manipulation of perceptions, IEEE Transactions on
Circuits and Systems, 45(1), pp 105-119, 1999
The Surgeon's Third Hand – An Interactive Robotic C-Arm Fluoroscope
Norbert Binder1, Christoph Bodensteiner1, Lars Matthäus1,
Rainer Burgkart2 and Achim Schweikard1
1Institut für Robotik und Kognitive Systeme, Universität zu Lübeck
2Klinik für Orthopädie und Sportorthopädie, Klinikum Rechts der Isar, TU-München
Germany
1 Introduction
Industry has been using robots for years to achieve high working precision at reasonable cost. When performing monotonous work, the attention of human operators weakens over time, resulting in mistakes; this increases production costs and reduces productivity. There is also constant pressure to reduce the costs of industrial processes while maintaining or increasing their quality.
The idea of integrating robots into the OR was born over a decade ago. Most of these robots are designed for invasive tasks, i.e. they are active tools for medical treatment. Some are telemanipulation systems, filtering tremor and scaling the movements of the user; others move according to pre-operatively calculated plans, positioning instruments of all kinds. The main goal was to achieve higher precision in comparison to human surgeons, often ignoring time and financial aspects. As the economic situation at hospitals becomes more and more strained, economic factors such as cost, time and OR utilization become increasingly important in medical treatment. At present, only a few systems can fulfil both requirements: increasing precision and reducing the duration of an intervention.
Fig 1 Robotized C-arm
We want to introduce another type of robot, one which assists the surgeon by simplifying the handling of everyday OR equipment. The main goal is to integrate new features such as enhanced positioning modes or guided imaging while keeping the familiar means of operation and improving the workflow. The robotic assistance system works in the background until the user wants to use the additional features. Taking a common non-isocentric fluoroscopic C-arm as the starting point, we will explain the way from a manually operated device to an interactive fluoroscope with enhanced positioning and imaging functionality. We first discuss the problems of a common C-arm and present possible solutions. We then examine the mechanical structure and derive the direct and inverse kinematics solutions. In the next section, we describe how the device was equipped with motors, encoders and controllers. Finally, we discuss the results of the functionality study and show ways to improve the next generation of robotized C-arms.
2 State of the art
2.1 Classical Manual C-arm
The common C-arm is a small, mobile X-ray unit. The X-ray source (XR) and the image intensifier (II) are mounted on a C-shaped carriage system. Conventionally there are five serial axes, A1 to A5, two of them translational and the rest rotational:
A3 (parameter d3): translational; sliding carriage; arm-length adjustment
A5 (parameter θ5): rotational; orbital movement; C-rotation in the C-plane
Table 1 The axes and their names and functions
The joint arrangement in combination with the C-shape allows XR and II to be positioned around a patient lying on the OR couch. Apart from the lift, which is motor-driven, all joints have to be moved by hand, one after another: the brake is released, positioning is performed, and after reaching the target position the brake has to be locked again. As shown in Fig 2, the axes A4 and A5 do not intersect. In addition, A5 does not intersect the center beam of the X-ray cone. Therefore, there is no mechanically fixed center of rotation (isocenter).
Fig 2 A common C-arm with the axes A1 to A5
The overall dimensions and weight of the device are designed for mobility. The C-arm used for our experiments can be moved by one person and fits through normal-sized doors. Thus it can be moved from one intervention scene, such as the OR or ER, to the next and is not bound to one room.
2.2 Isocentric and semiautomatic C-arms
Meanwhile, industry has paid attention to the medical requirements and developed new devices which can deal with the problem of the missing isocenter. The Siemens SIREMOBIL IsoC (Kotsianos et al 2001, Euler et al 2002) is constructed with a mechanical isocenter, i.e. A4, A5 and the center beam intersect at the same point. The radius is fixed, i.e. the distance of the II to the center of rotation cannot be adjusted. As XR and II had to be mounted inside the C to allow for a 190° rotation of joint five, the size of the C had to be increased. Motorization of A5 makes it possible to perform data acquisition for 3D reconstruction (semi-)automatically.
Another system, which is the basis of this project, is the Ziehm Vario 3D. It has no mechanical isocenter, but when the orbital axis is moved by the user, it can automatically compensate for the offsets with the translational joints (Koulechov et al 2005). This method works for the vertical plane only, but it allows a free selection of the radius within the mechanical limits. The newest version can also perform the orbital movement in a fully or semi-automatically motor-driven way. Similar to the Siemens device, data acquisition for 3D reconstruction is realized this way.
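The compensation principle can be sketched as follows: when the C rotates about the non-isocentric axis, the chosen center of rotation is kept on the beam by translating with the other joints. The 2-D geometry below is a simplified sketch with assumed offsets, not the Vario 3D's actual control law:

```python
import numpy as np

def compensation_offsets(roi, pivot, delta_angle):
    """Translation (y, z) of the carriage needed so that the point 'roi' stays on
    the center beam when the C rotates by delta_angle (rad) about 'pivot'.
    2-D sketch in the vertical plane: y = arm direction, z = lift direction."""
    c, s = np.cos(delta_angle), np.sin(delta_angle)
    rot = np.array([[c, -s], [s, c]])
    moved_roi = pivot + rot @ (roi - pivot)  # where the ROI-fixed point ends up
    return roi - moved_roi                   # shift the whole C back by this amount

# Example: ROI 300 mm below the mechanical pivot, one 10-degree orbital step.
offsets = compensation_offsets(np.array([0.0, -300.0]),
                               np.array([120.0, 0.0]),
                               np.deg2rad(10.0))
```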
2.3 Medical problems
When using a standard C-arm fluoroscope, two problems occur. First, it is often desired to obtain several images from the same viewing angle during the operation; if the joints were moved after taking the first radiograph, the old position and orientation must be found again. Second, if another image of the same region but from a different angle is required, more than one joint must in general be adjusted. Even basic movements, such as straight-line (translational) movements in the image plane, are difficult to perform manually, because three of the joints are rotational. Similarly, pure rotations around the region of interest (ROI), if desired, are cumbersome, since two joints are translational and most of the devices are not isocentric. This illustrates the rather complex kinematic construction of standard C-arms. At the moment, handling is done manually and without guidance, which often leads to wrong positioning and thus to a high number of false radiographs, not to mention the radiation dose.
3 Computer Simulation
Completing the mechanical work was expected to take some time. Therefore, a software simulation was created to mimic the functionality of the real device (Gross et al 2004). It allows the primary idea of the project, and a first series of applications, to be tested and evaluated (see also section 6).
Additionally, simulated radiographs of our virtual patient can be generated. Image- and landmark-based positioning can now be performed in the same way as on the real machine. A comparison of the image quality is given in Fig 4 for a radiograph of the hip joint. Although the proof-of-concept study of the mechanical version is now complete, the simulation is still a valuable tool for testing new concepts and ideas; algorithms for collision detection and path planning are now being developed on this basis.
Fig 3 Screenshot of the simulation GUI
Fig 4 Comparison of real radiograph and simulation for a hip joint
4 Realisation
The C-arm has five serially arranged joints, which limit free positioning in 3D space to 5 DOF. The image plane and image center can be selected freely, but the rotation around the beam axis then depends on the previously selected parameters.
The challenges of the hardware part of the robotization lie in moving large masses with high precision and at OR-acceptable speed. Thus, careful selection and integration of motors, controllers and encoders is important for success, and is therefore the subject of discussion in this study.
4.1 Proof of concept study
The basis of our work was a C-arm with a motorized lift (joint one) and arm stretch (joint three). Position encoders were included for these joints, and also for the orbital movement (joint five). Communication with an integrated PC via SSI allowed for isocentric movements in the vertical plane (Koulechov et al 2005).
For our project, a fully robotized device was required (Binder et al, 2005). In a first step, we measured the forces and torques during acceleration and motion. Based on this information, drives, gears and encoders were selected. Due to the lack of space inside the device, external mounting was favoured. This also guarantees high flexibility in changing and adapting single components for improvement throughout the development process.
Communication with the control PC was established via a fieldbus system and a programmable control unit on a PCI card. The existing communication had to be replaced, as it could not fulfil the requirements of our system and did not offer an equally comfortable programming interface.
Fig 5 Joints two, four and five with external drives, gears and encoders
After implementing some test routines, the applications were transferred from the simulation onto the real system. The tests were successful and indicated the high potential of the motorized system (Binder et al 2006); some of the results are shown in the application section. Knowing the positioning precision required for some of our applications, such as 3D reconstruction, we developed a setup that allowed the real C-arm to be registered to the kinematic model. Positions and orientations were measured by an infrared tracking system and then compared to those we calculated.
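One common way to perform such a registration is a least-squares rigid alignment between the poses predicted by the kinematic model and those measured by the tracking system; the sketch below shows that generic Kabsch/SVD procedure, not the authors' specific setup:

```python
import numpy as np

def rigid_fit(model_pts, measured_pts):
    """Least-squares rotation R and translation t with measured ~ R @ model + t,
    computed from corresponding (N, 3) point sets (Kabsch/SVD method)."""
    cm, cs = model_pts.mean(axis=0), measured_pts.mean(axis=0)
    h = (model_pts - cm).T @ (measured_pts - cs)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))          # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = cs - r @ cm
    return r, t

def residual_rms(model_pts, measured_pts, r, t):
    """Root-mean-square distance between measured and registered model points."""
    diff = measured_pts - (model_pts @ r.T + t)
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))
```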
4.2 Experiences
Positioning the C-arm with our software can be done easily, which shows that the concept is feasible. A modest investment of cost and effort can therefore help to improve the workflow in mobile X-ray imaging.
Nevertheless, there are still some problems to overcome. The weights which have to be moved require a high gear reduction. Moving the joints manually is therefore hard to manage, as it requires a lot of strength and would damage the gears; we will integrate a different coupler in the next generation. Meanwhile, the semi-manual handling, i.e. motor-assisted movements, which were originally integrated for weight compensation, performs very promisingly and will also be part of the next generation. Additionally, the effects of mechanical deformation and torsion during positioning are too large to be ignored. We are currently working on several methods to compensate for these errors; the results will be published later. High masses in combination with friction also influence the positioning accuracy; joints two and four in particular are subject to further improvement. For better motor control and to minimize the effects of torsion and elasticity in the drive axes and belts, the position encoders will be positioned at the driven part, as originally planned.
The next generation of our C-arm will integrate the experiences we have obtained and allow for further evaluation
5 Kinematics
Now that the C-arm joints can be moved by motors, some of the applications, such as weight compensation, can already be realized. Performing exact positioning, however, requires knowledge of the relationship between the joint parameters and the C-arm position and orientation. This so-called kinematic problem consists of two parts. The direct kinematics, i.e. calculating the position of the ROI and the orientation of the beam from given joint parameters, can be derived by setting up the DH matrices and multiplying them one after another; these basics are described e.g. in (Siegert, HJ & Bocioneck S 1996). For most applications, the ROI and beam direction are given by the medical application with respect to the C-arm base or a previously taken radiograph, and the joint parameters for this new position have to be calculated. This is the so-called inverse kinematic problem. In the following section we introduce a geometry-based solution.
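As a sketch of the direct kinematics, the pose is obtained by chaining the standard DH transforms of the five joints; the DH parameters below are placeholders, not the real C-arm geometry:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg link transform."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_rows):
    """Chain the per-joint DH transforms A1..A5 one after another."""
    t = np.eye(4)
    for theta, d, a, alpha in dh_rows:
        t = t @ dh_transform(theta, d, a, alpha)
    return t  # 4x4 pose of the beam/ROI frame with respect to the C-arm base

# Placeholder joint values [theta, d, a, alpha]; the lift (d1) and the arm
# stretch (d3) enter as translational parameters, the remaining joints rotate.
pose = forward_kinematics([
    [0.0,        0.80, 0.0, 0.0],         # A1 lift (translational)
    [np.pi / 8,  0.00, 0.0, np.pi / 2],   # A2
    [0.0,        0.50, 0.0, 0.0],         # A3 arm length (translational)
    [np.pi / 12, 0.0,  0.3, np.pi / 2],   # A4
    [np.pi / 6,  0.0,  0.4, 0.0],         # A5 orbital rotation
])
```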
5.1 Inverse kinematics
Conventional C-arms have five joints and are thus limited to 5 DOF. The consequence is that the rotation of the radiograph around the center beam depends on the target position p and on the beam direction mz. Table 2 introduces the basic variables used in this calculation; they are illustrated in Fig 2 and Fig 7.
Fig 6 Mechanical offsets of the system: O4 is the rotational center of the C and O5 the center between image intensifier and generator
Fig 7 Axes and vectors used for inverse kinematics
Oz : target point (= ROI); Oz = O5 in case the default radius is used. Oz can be moved along the center beam to adapt the radius
gz : center beam, the line from the X-ray source (XR) through the ROI
g3 : line along the arm at height d1 with direction m3, intersecting gz
Table 2 Information given for the inverse kinematics
The idea of this calculation is now to reduce the number of unknown joint parameters by
applying geometrical knowledge and interdependencies
As illustrated in Fig 7, the translational axis g3 with direction vector m3, which describes the arm, depends directly on θ2. Equating g3 and gz yields expressions for the height d1 and for tan(θ2) in terms of the components of the target point Oz (Ozx, Ozy, Ozz) and of the beam direction mz (mzx, mzy, mzz).
These functions can now be used to calculate the joint parameters for A4 and A5. Two planes have to be defined for further work: the C-plane EC with normal vector nC, and the z0-g3 plane with normal vector n03. The angle θ4 is the angle between the C-plane and the z0-g3 plane (see Fig 9); its sign is decided by whether the part gz||n of gz that is parallel to n03 points along or against n03.
The angle θ5 is given by the angle between gz and g3; to fit our definitions, this value is subtracted from π/2 (see Fig 8). Its sign is fixed by extracting the part gz||3 of gz which is parallel to g3 and checking whether it points along or against g3.
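The signed-angle constructions described above can be sketched generically: the magnitude of θ4 follows from the angle between the two plane normals and its sign from the component of the beam direction along n03, while the orbital angle follows from the angle between the beam and arm directions. The exact sign conventions used here are assumptions:

```python
import numpy as np

def signed_plane_angle(n_c, n_03, g_z):
    """Angle between the C-plane (normal n_c) and the z0-g3 plane (normal n_03);
    the sign is taken from the component of the beam direction g_z along n_03."""
    n_c = n_c / np.linalg.norm(n_c)
    n_03 = n_03 / np.linalg.norm(n_03)
    angle = np.arccos(np.clip(np.dot(n_c, n_03), -1.0, 1.0))
    return angle if np.dot(g_z, n_03) >= 0.0 else -angle

def orbital_angle(g_z, m_3):
    """pi/2 minus the angle between the beam direction g_z and the arm
    direction m_3, following the construction described above."""
    g = g_z / np.linalg.norm(g_z)
    m = m_3 / np.linalg.norm(m_3)
    return np.pi / 2.0 - np.arccos(np.clip(np.dot(g, m), -1.0, 1.0))
```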
The last missing joint parameter is the length of the arm, i.e. the horizontal distance between O2 and O3 (see Fig 10). Due to the mechanics, the following information is given:
• [O5, O4] is perpendicular to gz, has length a5 and lies in EC
• [O4, O3] is perpendicular to
Fig 10 Projection into the EC plane
To get the correct direction vectors, the following vectors are defined:
• m3⊥ : the part of m3 perpendicular to mz
• mz⊥ : the part of mz perpendicular to m3