Dynamic Vision for Perception and Control of Motion - Ernst D. Dickmanns, Part 8




6.4 Recursive Estimation Techniques for Dynamic Vision

6.4.2.3 Complete Recursion Loop in Kalman Filtering

These results are summarized in the following table as algorithmic steps for the basic version of the extended Kalman filter for real-time vision (4-D approach):

1. Find a good guess for the n initial state components x*(0) to start with [initial hypothesis k = 0, x̂_0 = x*(0)].

2. Find an estimate for the probability distribution of this initial state in terms of the first and second moments of a Gaussian distribution (mean value x̂_0 and the (n·n) error covariance matrix P_0). The diagonal terms are the component variances σ²_i.

Entry point for recursively running loop

4. Increment time index k = k + 1.

5. Compute expected values for the state variables at time k+1 [state prediction x*(k)]:
x*(k) = A(k−1)·x̂(k−1) + B(k−1)·u(k−1).

6. Predict the expected error covariance matrix P*(k) (components: state prediction and noise corruption):
P*(k) = A(k−1)·P(k−1)·A^T(k−1) + Q(k−1).

7. Compute the expected measurement values y*(k) = h[x*(k), p_m] and the (total) Jacobian matrix C = ∂y*/∂x|_N as first-order approximations around this point.

Go back to step 4 for the next loop.

Steps for monitoring convergence and progress in the vision process will be discussed in connection with applications.
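Since the innovation steps that close the loop are not reproduced in this extract, the following is a minimal sketch of one recursion cycle (prediction steps 5–7 plus a standard EKF measurement update) in Python/NumPy; the function signature, the measurement model h, and its Jacobian callable are illustrative assumptions, not the book's code.

```python
import numpy as np

def ekf_cycle(x_hat, P, u, y_meas, A, B, Q, R, h, jac_C):
    """One recursion cycle of the basic EKF (4-D approach): steps 5-7
    plus the innovation that the full scheme in Section 6.4.7 spells out."""
    # Step 5: state prediction x*(k) = A(k-1) x_hat(k-1) + B(k-1) u(k-1)
    x_star = A @ x_hat + B @ u
    # Step 6: covariance prediction P*(k) = A P A^T + Q
    P_star = A @ P @ A.T + Q
    # Step 7: expected measurements y* = h[x*, p_m] and Jacobian C = dy*/dx
    y_star = h(x_star)
    C = jac_C(x_star)
    # Innovation: gain, state update, covariance update (Equation 6.17 form)
    S = C @ P_star @ C.T + R                 # innovation covariance
    K = P_star @ C.T @ np.linalg.inv(S)      # gain matrix K(k)
    x_new = x_star + K @ (y_meas - y_star)
    P_new = (np.eye(len(x_hat)) - K @ C) @ P_star
    return x_new, P_new
```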



6.4.3 The Stabilized Kalman Filter

Equation 6.17 is not well conditioned for use on a computer with limited word length. Canceling of significant digits may lead to asymmetries not allowed by the definition of P*. Numerically better conditioned is the following equation (I = identity matrix):

P(k) = [I − K(k)·C(k)]·P*(k)·[I − K(k)·C(k)]^T + K(k)·R(k)·K^T(k).
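The stabilized form above is the symmetric (Joseph) update; a brief sketch, assuming the notation of the previous section:

```python
import numpy as np

def stabilized_covariance_update(P_star, K, C, R):
    """P = (I - K C) P* (I - K C)^T + K R K^T: algebraically equal to
    (I - K C) P*, but it preserves symmetry and positive semidefiniteness
    when significant digits cancel on a machine with limited word length."""
    IKC = np.eye(P_star.shape[0]) - K @ C
    return IKC @ P_star @ IKC.T + K @ R @ K.T
```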

6.4.4 Remarks on Kalman Filtering

Filter tuning, that is, selecting proper parameter settings for the recursive algorithm, is essential for good performance of the method. A few points of view will be discussed in this section.

6.4.4.1 Influence of the Covariance Matrices Q and R on the Gain Matrix K

The basic effects of the influence of the covariance matrices Q and R on the gain matrix K can be seen from the following simple scalar example: Consider the case of just one state variable, which may be measured directly as the output variable r. For σ²_x >> σ²_y, i.e., very good measurements, a K value just below 1 results; for example,

r̂(k) = r*(k) + 0.95·{r_m(k) − r*(k)} = 0.05·r*(k) + 0.95·r_m(k). (6.24)

This tells us, for example, that when the initial conditions are poor guesses and the measurement data are rather reliable, the R elements should be much smaller than the Q elements. But when the dynamic models are good and the measurements are rather noisy, they should be selected the other way round. For σ²_y >> σ²_x, i.e., for very poor measurement data, a small filter gain factor K results; for example, r̂(k) ≈ 0.95·r*(k) + 0.05·r_m(k).

These considerations carry over to the multivariate case
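A small numeric illustration of the scalar case (the variance values are chosen for illustration only):

```python
def scalar_gain(p_pred, r):
    """Scalar Kalman gain K = p* / (p* + r) for a directly measured state."""
    return p_pred / (p_pred + r)

print(scalar_gain(p_pred=1.0, r=0.05))   # ~0.95: good measurements dominate the estimate
print(scalar_gain(p_pred=0.05, r=1.0))   # ~0.05: the model prediction dominates instead
```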

6.4.4.2 Kalman Filter Design

For the observer (not treated here), the specification of the filter gain matrix is achieved by assigning desired eigenvalues to the matrix (pole positions). In the Kalman filter, the gain matrix K(k) is obtained automatically from the covariance matrices Q(k) and R(k), assumed to be known for all t_k, as well as from the error covariance matrix P*_0.

K(k) results from P*_0, R(k), and Q(k): P*_0 may be obtained by estimating the quality of in-advance knowledge about x_0 (how good/certain is x_0?). R(k) may eventually be derived from an analysis of the measurement principle used. On the contrary, finding good information about Q(k) is often hard: Both the inaccuracies of the dynamic model and the perturbations on the process are usually not known.

In practice, heuristic methods are often used to determine K(k): Initially, change Q(k) only, for fixed R(k), based on engineering judgment. Then, after achieving useful results, also change R(k) as long as the filter does not show the desired performance level (filter tuning).

The initial transient behavior is essentially determined by the choice of P*_0: The closer P*_0 is chosen with respect to (the unknown) P*(k), the better the convergence will be. However, the larger the initial covariance values P*_0 selected, the more will measured values be taken into account. (Compare the single-variable case: Large initial values of p*(k) resulted in K values close to 1, i.e., a strong influence of the measured values.) But note that this can be afforded only if the measurement noise R is not too large; otherwise, there may be no convergence at all. Filter tuning is often referred to as an art because of the difficulties in correctly grasping all the complex interrelationships.


Gauss-Markov estimators with postfiltering are not discussed here; in this approach, in which more measurement equations than state variables have to be available, the noise-corrupted state variables arrived at by a Gauss-Markov estimator are filtered in a second step. Its advantage lies in the fact that for each state variable a filter factor between 0 and 1 can be set separately. Details on several recursive filter types may be found in the reference [Thrun et al 2005].

6.4.5 Kalman Filter with Sequential Innovation

At time t_k, the m_k measurement values y(k) are obtained. The filter gain matrix

K(k) = P*(k)·C^T(k)·[C(k)·P*(k)·C^T(k) + R(k)]^(−1)

is obtained by inversion of a matrix of dimension m_k × m_k. The central statement for the so-called "sequential innovation" is:

Under certain conditions, an m_k-dimensional measurement vector can always be treated like m_k scalar single measurements that are exploited sequentially; this reduces the matrix inversion to a sequence of scalar divisions.
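For uncorrelated (diagonal R) linear measurements, this equivalence can be checked numerically; a small sketch with arbitrary example numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 3
P = 0.5 * np.eye(n)
C = rng.normal(size=(m, n))
R = np.diag([0.1, 0.2, 0.3])          # uncorrelated measurements
x = np.zeros(n)
y = rng.normal(size=m)

# Batch innovation: one m x m matrix inversion
K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
x_batch = x + K @ (y - C @ x)

# Sequential innovation: m scalar divisions
x_seq, P_seq = x.copy(), P.copy()
for i in range(m):
    c = C[i]
    k = P_seq @ c / (c @ P_seq @ c + R[i, i])   # scalar denominator
    x_seq = x_seq + k * (y[i] - c @ x_seq)
    P_seq = P_seq - np.outer(k, c) @ P_seq
print(np.allclose(x_batch, x_seq))    # True: same result for diagonal R
```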

In image processing, different features derived from video signals can be assumed to be uncorrelated if the attributes of such features are gained by different processors and with different algorithms. Attributes of a single feature (e.g., the y- and z-components in the image plane) may well be correlated.

6.4.5.2 Algorithm for Sequential Innovation

The first scalar measurement at a sampling point t_k yields the first partial innovation dx̂_1(k) = k_1·[y_1(k) − y*_1(k)], with the gain column k_1 = P_0·c_1^T / (c_1·P_0·c_1^T + σ²_1).

Here, c1 is the first row (vector) of the Jacobian matrix C(k) corresponding to y1.

After this first partial innovation, the following terms may be computed:


x̂_1(k) = x*(k) + dx̂_1(k) (improved estimate for the state),
P_1 = P_0 − k_1·c_1·P_0 (improved error covariance matrix). (6.31)

For the I-th partial innovation,

dx̂_I = dx̂_{I−1} + k_I·(y_I − y*_I), I = 1, …, m_k, dx̂_0 = 0; (6.32)

momentarily the best estimate for y_I has to be inserted. This estimate is based on the improved estimated value for x:

x̂_{I−1}(k) = x*(k) + dx̂_{I−1}; (6.33)

it may be computed either via the complete perspective mapping equation

y*_I(k) = h_I[x̂_{I−1}(k)], (6.34)

or, to save computer time, by correction of the predicted measurement value (at t_{k−1} for t_k) y*_{I,0}:

y*_I(k) = y*_{I,0}(k) + c_I·dx̂_{I−1} (simplification: c_I ≈ const for t = t_k). (6.35)

With this, the complete algorithm for sequential innovation is:

Initialization:
y*_{I,0} = g_I[x*(k)], I = 1, …, m_k,
σ²_I = E{w_I²}, y_I = g_I[x] + w_I,
c_I = dy_I/dx, I = 1, …, m_k,
dx̂_0 = 0, P_0 = P*(k). (6.36)

Recursion for I = 1, …, m_k:
y*_I = y*_{I,0} + c_I·dx̂_{I−1} (simplification: c_I ≈ const for t = t_k),
k_I = P_{I−1}·c_I^T / (c_I·P_{I−1}·c_I^T + σ²_I),
dx̂_I = dx̂_{I−1} + k_I·(y_I − y*_I),
P_I = P_{I−1} − k_I·c_I·P_{I−1}.

Final step at I = m_k: x̂(k) = x*(k) + dx̂_{m_k}, P(k) = P_{m_k}. (6.37)
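A compact sketch of Equations (6.36)–(6.37) as reconstructed above; the variable names are illustrative, and the linearized measurement correction (6.35) is used in place of the full perspective mapping (6.34):

```python
import numpy as np

def sequential_innovation(x_star, P_star, y, y_star0, C, sigma2):
    """Sequential innovation (assumed reconstruction of the scheme above).

    x_star  : predicted state x*(k)            (n,)
    P_star  : predicted covariance P*(k)       (n, n)
    y       : measured values y_I              (m_k,)
    y_star0 : predicted measurements y*_{I,0}  (m_k,)
    C       : Jacobian rows c_I                (m_k, n)
    sigma2  : scalar measurement variances     (m_k,)
    """
    dx = np.zeros_like(x_star)
    P = P_star.copy()
    for i in range(len(y)):
        c = C[i]
        y_star = y_star0[i] + c @ dx            # Eq. (6.35): corrected prediction
        k = P @ c / (c @ P @ c + sigma2[i])     # Eq. (6.37): scalar gain
        dx = dx + k * (y[i] - y_star)           # Eq. (6.32): partial innovation
        P = P - np.outer(k, c) @ P              # improved covariance
    return x_star + dx, P                       # final step at I = m_k
```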

6.4.6 Square Root Filters

Appreciable numerical problems may be encountered even in the so-called stabilized form, especially with computers of limited word length (such as in navigation computers onboard vehicles of the not too distant past): Negative eigenvalues of P(k) for poorly conditioned models could occur numerically, for example, with

• very good measurements (eigenvalues of R small compared to those of P*), amplified, for example, by large eigenvalues of P_0;

• large differences in the observability of single state variables, i.e., large differences in the eigenvalues of P(k).

These problems have led to the development of so-called square root filters [Potter 1964] for use in the Apollo onboard computer. The equations for recursive computation of P are substituted by equations for a recursion in the square root matrix P^(1/2). The filter gain matrix K then is computed directly from P^(1/2). This has the advantage that the range of eigenvalues of the covariance matrices P and R representable with the limited word length is increased considerably. For example, let the covariance of the state variable "position of the space vehicle" be σ²_x = 10^4 m² and the variance of the measurement value "angle" be σ²_y = 10^(−6) rad². The numerical range for P, R then is 10^10, while the numerical range for P^(1/2), R^(1/2) is 10^5. For modern general-purpose microprocessors, this aspect may no longer be of concern. However, specialized hardware could still gain from exploiting this problem formulation.
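The gain in numerical range can be checked directly (the variance values follow the reconstructed example above):

```python
import math

sigma2_x = 1.0e4    # variance of the position state [m^2]
sigma2_y = 1.0e-6   # variance of the angle measurement [rad^2]

print(sigma2_x / sigma2_y)                        # 1e10: range of P, R
print(math.sqrt(sigma2_x) / math.sqrt(sigma2_y))  # 1e5 : range of P^1/2, R^1/2
```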

What is the "square root of a matrix"? For each positive semidefinite matrix P, there exist multiple roots P^(1/2) satisfying P = P^(1/2)·(P^(1/2))^T. Frequently, "upper" triangular matrices U′ are used: P = U′·U′^T. Their disadvantage is the relatively costly computation of at least n scalar square roots for each recursion step. This may be bypassed by using triangular matrices with 1s on the diagonal (unit upper triangular matrices U) and a diagonal matrix D that may be treated as a vector.
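A minimal sketch of such a factorization P = U·D·U^T (unit upper triangular U, diagonal D stored as a vector), computed without any scalar square roots; the routine name is illustrative:

```python
import numpy as np

def udu_factorize(P):
    """Factor a symmetric positive definite P as P = U D U^T,
    with U unit upper triangular and D diagonal (stored as a vector)."""
    n = P.shape[0]
    U = np.eye(n)
    D = np.zeros(n)
    P = P.copy()
    for j in range(n - 1, -1, -1):
        D[j] = P[j, j]
        U[:j, j] = P[:j, j] / D[j]
        # remove the contribution of column j from the remaining block
        P[:j, :j] -= D[j] * np.outer(U[:j, j], U[:j, j])
    return U, D

# Check: U @ np.diag(D) @ U.T reproduces P, with no square roots involved
```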

6.4.6.1 Kalman Filter with UDU^T-Factorized Covariance Matrix

This method has been developed by Thornton and Bierman [Thornton, Bierman 1980]. It can be summarized by the following bullets:

• Use the decomposition of the covariance matrix P = U·D·U^T. Due to the property U·D^(1/2) = U′, this filter is considered to belong to the class of square root filters.

• In the recursion equations, the following replacement has to be done: P* by P* = U*·D*·U*^T.

Modified innovation equations:

• Starting from the sequential formulation of the innovation equations (with k_i as the column vectors of the Kalman gain matrix and c_i as the row vectors of the Jacobian matrix),

k_i = P_{i−1}·c_i^T / (c_i·P_{i−1}·c_i^T + σ²_i),


a scalar q, called the innovation covariance, q_i = c_i·P_{i−1}·c_i^T + σ²_i, appears in the denominator.

8. Set U_{i−1} = U_i and D_{i−1} = D_i for the next measurement value.

9. Repeat steps 1 to 8 for i = 1, …, m_k.

10. Set U = U_{m_k} and D = D_{m_k} as starting values for the prediction step.
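Steps 1 to 7 of this listing are not reproduced in the extract; the following sketch shows a common Bierman-type formulation of the scalar UD measurement update that such a listing describes (a reconstruction, not the book's exact step sequence):

```python
import numpy as np

def bierman_scalar_update(U, D, c, r):
    """Update the factors of P = U D U^T for one scalar measurement
    (row vector c, noise variance r) without any matrix inversion.
    Returns the updated factors and the Kalman gain column k_i."""
    n = len(D)
    U_new, D_new = U.copy(), D.copy()
    f = U.T @ c                   # f = U^T c
    v = D * f                     # v_j = D_j f_j
    alpha = r + v[0] * f[0]       # running innovation covariance
    D_new[0] = D[0] * r / alpha
    b = np.zeros(n)
    b[0] = v[0]
    for j in range(1, n):
        alpha_prev = alpha
        alpha += v[j] * f[j]
        lam = -f[j] / alpha_prev
        D_new[j] = D[j] * alpha_prev / alpha
        for i in range(j):
            U_new[i, j] = U[i, j] + lam * b[i]
            b[i] += U[i, j] * v[j]
        b[j] = v[j]
    k = b / alpha                 # gain column; alpha = c P c^T + r
    return U_new, D_new, k
```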

Extrapolation of the covariance matrix in UDU^T-factorized form:

The somewhat involved derivation is based on a Gram-Schmidt vector orthogonalization and may be found in [Maybeck 1979, p. 396] or [Thornton, Bierman 1977].

• The starting point is the prediction of the covariance matrix:


P*(k) = A·U·D·U^T·A^T + U_q·D_q·U_q^T = W·D̃·W^T, with

W = [A·U, U_q] (dimension: n × 2n), (6.51)
D̃ = diag(D, D_q) (dimension: 2n × 2n). (6.52)

• The recursion obtained then is

Initialization: [a_1, a_2, …, a_n] = W^T (the column vectors a_i of W^T are the rows of W).
Recursion backward for κ = n, n−1, …, 1.

…computing time was needed, with simultaneously improved stability compared to a Kalman filter with sequential innovation.
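A sketch of the factored covariance prediction described above (Thornton's weighted Gram-Schmidt recursion); since Equations 6.53–6.56 are not reproduced here, the loop body is a reconstruction under the stated W and D̃ definitions:

```python
import numpy as np

def thornton_time_update(U, D, A, Uq, Dq):
    """Covariance prediction P* = A U D U^T A^T + Uq Dq Uq^T in factored form,
    via a backward weighted Gram-Schmidt recursion over the rows of W."""
    n = len(D)
    W = np.hstack([A @ U, Uq])            # Eq. (6.51): dimension n x 2n
    D_tilde = np.concatenate([D, Dq])     # Eq. (6.52): diag(D, Dq), 2n entries
    U_new = np.eye(n)
    D_new = np.zeros(n)
    for j in range(n - 1, -1, -1):        # recursion backward, kappa = n, ..., 1
        c = D_tilde * W[j]
        D_new[j] = W[j] @ c
        d = c / D_new[j]
        for i in range(j):
            U_new[i, j] = W[i] @ d
            W[i] -= U_new[i, j] * W[j]    # orthogonalize remaining rows
    return U_new, D_new
```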

6.4.6.3 General Remarks

With the UDU^T-factored Kalman filter, an innovation, that is, an improvement of the existing state estimate, may be performed with just one measurement value (observability given). The innovation covariance in Equation 6.45 is a measure of the momentary estimation quality; it provides information about an error zone around the predicted measurement value y*_i that may be used to judge the quality of the arriving measurement values. If they are too far off, it may be wise to discard such a measured value altogether.
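A minimal gating sketch along these lines (the 3-sigma threshold is an illustrative choice, not a value from the book):

```python
def accept_measurement(y_meas, y_pred, q, n_sigma=3.0):
    """Keep a measurement only if it lies inside an n_sigma error zone
    around the predicted value y*, with q the innovation covariance."""
    return abs(y_meas - y_pred) <= n_sigma * q ** 0.5
```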

6.4.7 Conclusion of Recursive Estimation for Dynamic Vision

The background and general theory of recursive estimation underlying the 4-D approach to dynamic vision have been presented in this chapter. Before the overall integration for complex applications is discussed in detail, a closer look at major components including the specific initialization requirements is in order: This will be done for road detection and tracking in Chapters 7 to 10; for vehicle detection and tracking, it is discussed in Chapter 11. The more complex task with many vehicles on roads will be treated in Chapter 14, after the resulting system architecture for integration of the diverse types of knowledge needed has been looked at in Chapter 13.

The sequence of increasing complexity will start here with an ideally flat road with no perturbations in pitch on the vehicle and the camera mounted directly onto the vehicle body. Horizontal road curvature and self-localization on the road are to be recognized [this has been dubbed SLAM (simultaneous localization and mapping) since the late 1990s]. As next steps, systematic variations in road (lane) width and recognition of vertical curvature are investigated. These items have to be studied in conjunction to disambiguate the image inversion problem in 3-D space over time. There also is a cross-connection between the pitching motion of the vehicle and the estimation of lane or road width. It even turned out that temporal frequencies have to be separated when precise state estimation is looked for. Depending on the loading conditions of cars, their stationary pitch angle will be different; due to gas consumption over longer periods of driving, this quasi-stationary pitch angle will slowly change over time. To adjust this parameter, which is important for visual range estimation, two separate estimation loops with different time constants and specific state variables are necessary (the actual dynamic pitch angle θ_V and the quasi-stationary bias angle θ_b). This shows that in the 4-D approach, even subtle points in understanding visual perception can be implemented straightforwardly.

Recursive estimation is not just a mathematical tool that can be applied in an arbitrary way (without having to bear the consequences); rather, the models both for motion in the real world and for perspective mapping (including motion blur!) have to be kept in mind when designing a high-performance dynamic vision system. Provisions to be implemented for intelligent control of feature extraction in the task context will be given for these application domains. As mentioned before, most gain in efficiency is achieved by looking at the perception and control process in closed-loop fashion and by exploiting the same spatiotemporal models for all subtasks involved.

The following scheme summarizes the recursive estimation loop with sequential innovation and UDU^T-factorization as it has been used in standard form, with minor modifications, over almost two decades.

Complete scheme with recursion loops in sequential Kalman filtering and UDU^T-factorization (steps 1 and 2 as in Section 6.4.2.3; the diagonal elements of the matrix D_0 now are the variance components σ²_i):

3. If the plant noise covariance matrix Q is diagonal, the starting value for U_q is the identity matrix I, and Q = U_q·D_q·U_q^T with D_q containing the (guessed or approximately known) values on the diagonal of the covariance matrix E{v·v^T}. In the sequential formulation, the measurement covariance matrix R = E{w·w^T} is replaced by its diagonal terms σ²_i.

Entry point for recursively running main loop (over time)

4. Increment time index k = k + 1.

5. Compute expected values for the state variables at time k+1 [state prediction x*(k)]:
x*(k) = A(k−1)·x̂(k−1) + B(k−1)·u(k−1).

6. Compute the expected measurement values y*(k) = h[x*(k), p_m] and the (total) Jacobian matrix C = ∂y*/∂x|_N as first-order approximations around this point.

7. Predict the expected error covariance matrix P*(k) in factorized form (Equations 6.43 and 6.49):

7.1 Initialize the matrices W = [A·U, U_q] (dimension: n × 2n; Equation 6.51) and D̃ = diag(D, D_q) (dimension: 2n × 2n; Equation 6.52).

Entry point for sequential computation of expected error covariance matrix

7.2 Recursion backward for κ = n, n−1, …, 1 (Equations 6.53 – 6.56);
go back to step 7.2 for prediction of the error covariance matrix.

8. Read the new measurement values y(k); m_k in size (may vary with time k).

9. Recursive innovation for each of the m_k measurement values:
go back to 9.1 for the next inner innovation loop as long as i ≤ m_k.
(Loop 9 yields the matrices U = U_{m_k} and D = D_{m_k} for prediction step 7.)

10. Monitor the progress of the perception process;
go back to step 4 for the next time step k = k + 1.

This loop may run indefinitely, controlled by higher perception levels, possibly triggered by step 10.


7 Beginnings of Spatiotemporal Road and Ego-state Recognition

In the previous chapters, we have discussed basic elements of the 4-D approach to dynamic vision. After an introduction to the general way of thinking (in Chapter 1), the basic general relations between the real world in 3-D space and time, on one hand, and features in 2-D images, on the other, have been discussed in Chapter 2. Chapter 5 was devoted to the basic techniques for image feature extraction. In Chapter 6, the elementary parts of recursive estimation developed for dynamic vision have been introduced. They avoid the need for storing image sequences by combining 3-D shape models and 3-D motion models of objects with the theory of perspective mapping (measurement models) and feature extraction methods. All this together was shown to constitute a very powerful general approach for dynamic scene understanding by research groups at UniBwM [Dickmanns, Graefe 1988; Dickmanns, Wünsche 1999].

Recognition of well-structured roads and the ego-state of the vehicle carrying the camera relative to the road was one of the first very successful application domains. This method had been used extensively long before the SLAM acronym (simultaneous localization and mapping) became fashionable for more complex scenes in the 1990s [Moutarlier, Chatila 1989; Leonhard, Durrant-White 1991; Thrun et al 2005]. This chapter shows the beginnings of spatiotemporal road recognition in the early 1980s, after the initial simulation work of H.G. Meissner between 1977 and 1982 [Meissner 1982], starting with observer methods [Luenberger 1964].

Since road recognition plays an essential role in a large application domain, both for driver assistance systems and for autonomously driving systems, the next four chapters are entirely devoted to this problem class. Over 50 million road vehicles are presently built every year worldwide. With further progress in imaging sensor systems and microprocessor technology, including storage devices and high-speed data communication, these vehicles would gain considerably in safety if they were able to perceive the environment on their own. The automotive industry worldwide has started developments in this direction after the "Pro-Art" activities of the European EUREKA project "PROMETHEUS" (1987–1994) set the pace [Braess, Reichart 1995a, b]. Since 1992, there is a yearly International Symposium on Intelligent Vehicles [Masaki 1992++] devoted to this topic. The interested reader is referred to these proceedings to gain a more detailed understanding of the developments since then.

This chapter starts with assembling and arranging components for spatiotemporal recognition of road borderlines while driving on one side of the road. It begins with the historic formulation of the dynamic vision process in the mid-1980s, which allowed driving at speeds up to ~100 km/h on a free stretch of Autobahn in 1987 with half a dozen 16-bit Intel 8086 microprocessors and one PC on board, while other groups using AI approaches without temporal models, but with far more computing power, drove barely one-tenth that speed. Remember that typical clock rates of general-purpose microprocessors at that time were around 10 MHz and the computational power of a microprocessor was ~10^(−4) (0.1 per mille) of what it is today. These tests proved the efficiency of the 4-D approach, which afterward was accepted as a standard (without adopting the name); it is considered just another extension of Kalman filtering today.

The task of robust lane and road perception is investigated in Section 7.4 with a special approach for handling the discontinuities in the clothoid parameter C1. Chapter 8 discusses the demanding initialization problem of getting started from scratch. The recursive estimation procedure (Chapter 9) treats the simple planar problem first (Section 9.1), without perturbations in pitch, before the more general cases are discussed in some detail. Extending the approach to nonplanar roads (hilly terrain) is done in Section 9.2, while handling larger perturbations in pitch is treated in Section 9.3. The perception of crossroads is introduced in Chapter 10.

Contrary to the approaches selected by the AI community in the early 1980s

[Davis et al 1986; Hanson, Riseman 1987; Kuan et al 1987; Pomerleau 1989; Thorpe et al 1987; Turk et al 1987; Wallace et al 1986], which start from quasi-static single-image interpretation, the 4-D approach to real-time vision uses both shape and motion models over time to ease the transition from measurement data (image features) to internal representations of the motion process observed.

The motion process is given by a vehicle driving on a road of unknown shape; due to gravity and vehicle structure (Ackermann steering, see Figure 3.10), the vehicle is assumed to roll on the ground with specific constraints (known in AI as "nonholonomic") and with the camera at a constant elevation H_c above the ground. Under normal driving conditions (no free rotations), vehicle rotation around the vertical axis (ψ) is geared to speed V and steering input λ, while pitch (θ) and roll (bank) angles (Φ) are assumed to be small; while pitch angles may vary slightly but cannot be neglected for large look-ahead ranges, the bank angle is taken as zero on average here.

7.1 Road Model

The top equation in Figure 3.10 indicates that for small steering angles, the curvature C of the trajectory driven is proportional to the steering angle λ. Driving at constant speed V = dl/dt and with a constant steering rate dλ/dt as control input, this means that the resulting trajectory has a linear change of curvature over the arc length. This class of curves is called clothoids; it has become the basic element for the design of high-speed roads. Starting the description from an initial curvature C0, the general model for the road element clothoid can then be written as

C = C0 + C1·l, (7.1)

with C1 as the clothoid parameter fixing the rate of change of the inverse of the radius R of curvature (C = 1/R). This differential geometry model has only two parameters and does not fix the location x, y and the heading χ of the trajectory. With the definition of curvature as the rate of change of the heading angle over arc length l (C = dχ/dl), there follows as first integral

χ = ∫C·dl = χ0 + C0·l + C1·l²/2. (7.2)

Heading direction is seen to enter as the integration constant χ0. The transition to Cartesian coordinates x (in the direction of the reference line for χ) and y (normal to it) is achieved by the cosine and sine components of the length increments l. For small heading changes (≤ 15°), the sine can be approximated by its argument and the cosine by 1, yielding (with index h for horizontal curvature):

x ≈ x0 + l, (7.3a)
y ≈ y0 + χ0·l + C0h·l²/2 + C1h·l³/6. (7.3b)

The integration constants x0 and y0 of this second integral specify the location of the trajectory element. For local perception of the road by vision, all three integration constants are of no concern; setting them to zero means that the remaining description is oriented relative to the tangent to the trajectory at the position of the vehicle. Only the last two terms in Equation 7.3b remain as the curvature-dependent lateral offset ΔyC. This description is both very compact and convenient. Figure 7.1 shows the local trajectory with heading change ΔχC and lateral offset ΔyC due to curvature.

Figure 7.1. From differential geometry to Cartesian coordinates (length increment Δl, components Δx and Δy_C, heading change Δχ).
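A short sketch of Equations (7.1)–(7.3b) for a single clothoid element, relative to the tangent at the vehicle position (the numeric example values are illustrative):

```python
def clothoid_local(l, C0h, C1h):
    """Curvature, heading change, and lateral offset of a clothoid element
    at arc length l, relative to the local tangent (chi0 = y0 = 0)."""
    C = C0h + C1h * l                                # Eq. (7.1)
    d_chi = C0h * l + 0.5 * C1h * l ** 2             # Eq. (7.2)
    d_yC = 0.5 * C0h * l ** 2 + C1h * l ** 3 / 6.0   # last two terms of Eq. (7.3b)
    return C, d_chi, d_yC

# Example: entry curvature 1/1000 m, C1 = 1e-5 1/m^2, 60 m look-ahead distance
print(clothoid_local(60.0, C0h=1e-3, C1h=1e-5))
```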

Roads are pieced together from clothoid elements with different parameters C0 and C1, but without jumps in curvature at the points of transition (segment boundaries). Figure 7.2 shows in the top part a road built according to this rule; the lower part shows the corresponding curvature over the arc length. For local vehicle guidance on flat ground, it is sufficient to perceive the two curvature parameters of the road, C0h and C1h, as well as the actual state of the vehicle relative to the road: lateral offset y0, lateral velocity v, and heading angle ψ [Dickmanns, Zapp 1986].

Figure 7.2. Road (upper curve, in a bird's-eye view) as a sequence of clothoid elements with continuity in position, heading, and curvature; the curvature change rate C1 is a sequence of step functions. The curvature over the arc length is a polygon (lower curve).


7.2 Simple Lateral Motion Model for Road Vehicles

Figure 7.3 shows simplified models based on the more complete one of Chapter 3. Figure 3.10 described the simple model for road vehicle motion (see also Section 3.4.5.2). The lateral extension of the vehicle is idealized to zero, the so-called bicycle model (with no bank angle degree of freedom!). Equation 3.37 and the block diagram in Figure 3.24 show the resulting fifth-order dynamic model. Neglecting the dynamic effects on yaw acceleration and slip angle rate (valid for small …)

Figure 7.3. Block diagram for lateral control of road vehicles, taking rotational dynamics around the vertical axis only partially into account: (a) infinite tire stiffness yields the simplest model; (b) taking tire force build-up for the slip angle into account (not for yaw rates) improves observation results.

Neglecting the dynamic term only in the differential equation for yaw acceleration, the turn rate becomes directly proportional to the steering angle, dψ_abs/dt = V·λ/a, and the dynamic model reduces to order four. The split in omitting tire softness effects for the yaw rate but not for the slip angle β can be justified as follows: The yaw angle is the next integral of the yaw rate and does not influence forces or moments acting on the vehicle; in the estimation process, it will finally be approximated by the correct value due to direct visual feedback. On the contrary, the slip angle β directly influences tire forces but cannot be measured directly.

The steering rate is the control input u(t), acting through a gain factor k_λ; the steering angle directly determines the absolute yaw rate in the fourth-order model. By subtracting the temporal heading rate of the road, V·C0h, from this value, the heading rate ψ̇_rel relative to the road results. After integrating, subtracting the slip angle,


and multiplying the sum by speed V, the lateral speed relative to the road, v = dy_V/dt, is obtained. The set of differential equations for the reduced state variables x = (λ, β, ψ_rel, y_V) then is

dλ/dt = k_λ·u(t),
dβ/dt = [1/(2·T_β) − V/a]·λ − β/T_β,
dψ_rel/dt = (V/a)·λ − V·C0h,
dy_V/dt = V·(ψ_rel − β),

with T_β = V/k_ltf; a = axle distance; k_ltf = F_y/(m_WL·α_f) (in m/s²/rad), the lateral acceleration as a linear function of the angle of attack between the tire and the ground, dubbed tire stiffness (see Section 3.4.5.2). For the UniBwM test van VaMoRs, the actual numbers are a = 3.5 m, k_ltf ≈ 75 m/s²/rad, and T_β = 0.0133·V s. For a speed of 10 m/s (36 km/h), the eigenvalue of the dynamic tire mode is 1/T_β = 7.5 s⁻¹ (= 0.3 times the video rate of 25 Hz); this can hardly be neglected for the design of a good controller since it is in the perceptual range of human observers. For higher speeds, this eigenvalue becomes even smaller. The k_ltf value corresponds to an average lateral acceleration buildup of 1.3 m/s² per degree of angle of attack. [This is a rather crude model of the vehicle characteristics, assuming the cg at the center between the two axles and the same tire stiffness at the front and rear axle; the precise data are as follows: The center of gravity is 1.5 m in front of the rear axle and 2 m behind the front axle; the resulting axle loads are 2286 kg rear and 1714 kg front (instead of 2000 kg each). Due to the twin wheels on each side of the rear axle, the tire stiffness at the two axles differs by almost a factor of 2. The values for the simple model selected yield sufficiently good predictions for visual guidance; this resulted from tests relative to the more precise model.]
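A simulation sketch of the reduced fourth-order model as reconstructed above, with state x = [λ, β, ψ_rel, y_V]; the Euler integration, the gain k_λ, and the exact form of the slip-angle equation are assumptions consistent with the verbal description, not the book's listing:

```python
import numpy as np

def lateral_model_step(x, u, C0h, V, dt, a=3.5, k_ltf=75.0, k_lambda=1.0):
    """One Euler step of the reduced lateral model for a vehicle like VaMoRs.
    x = [lambda, beta, psi_rel, y_V]; u is the steering-rate command."""
    lam, beta, psi_rel, y_V = x
    T_beta = V / k_ltf                                   # ~0.0133*V s
    lam_dot = k_lambda * u                               # steering rate input
    beta_dot = (1.0 / (2.0 * T_beta) - V / a) * lam - beta / T_beta
    psi_dot = (V / a) * lam - V * C0h                    # yaw rate relative to the road
    y_dot = V * (psi_rel - beta)                         # lateral speed on the road
    return x + dt * np.array([lam_dot, beta_dot, psi_dot, y_dot])
```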

7.3 Mapping of Planar Road Boundary into an Image

This section has been included for historical reasons only; it is not recommended for actual use with the computing power available today. The simple mapping model given here allowed the first autonomous high-speed driving in 1987 with very little computing power (a few 16-bit microprocessors with clock rates on the order of 10 MHz). Only intelligently selected subregions of the images (256×256 pixels) could be evaluated at a rate of 12.5 Hz. Moving at speed V along the road, the curvature changes C(l) appear in the image plane of the camera as time-varying curves. Therefore, exploiting Equation 7.1 and the relationship V = dl/dt, a simple dynamic model for the temporal change of the image of the road while driving along it can be derived.

7.3.1 Simple Beginnings in the Early 1980s

For an eye point at elevation h above the road and at the lane center, a line element of the borderline at the look-ahead distance L is mapped onto the image plane coordinates y_B, z_B according to the laws of perspective projection (see Figure 7.4).
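Figure 7.4 and the corresponding mapping equations are not reproduced in this extract; the following is a generic flat-ground pinhole sketch of the idea (the focal length f and the sign conventions are assumptions):

```python
def map_border_element(y_lateral, L, h, f):
    """Map a borderline element at look-ahead distance L and lateral offset
    y_lateral (camera at elevation h above a planar road) into image
    coordinates: y_B horizontal, z_B vertical below the horizon line."""
    y_B = f * y_lateral / L
    z_B = f * h / L
    return y_B, z_B

# Example: camera 1.8 m above the road, f = 750 pixel, look-ahead 40 m
print(map_border_element(y_lateral=2.5, L=40.0, h=1.8, f=750.0))
```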
