Thesis - Beaver simulation

Appendix D

methods are therefore called predictor-corrector methods. A predictor-corrector scheme for calculating y_n+1 is as follows:

1 - Use the predictor to calculate y_n+1^(0), an initial approximation to y_n+1. Set i = 0.

2 - Evaluate the derivative function and set f_n+1^(i) = f(y_n+1^(i), t_n+1).

3 - Calculate a better approximation y_n+1^(i+1) using the corrector formula.

4 - If |y_n+1^(i+1) - y_n+1^(i)| > some specified error tolerance, then increment i by 1 and go to step 2; otherwise, set y_n+1 = y_n+1^(i+1).
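The four steps above can be sketched in code. The following is a minimal illustration (not from the thesis), using forward Euler as predictor and the trapezoidal rule as corrector; the function names and tolerance value are illustrative choices:

```python
def pc_step(f, t_n, y_n, h, tol=1e-10, max_iter=50):
    """One predict-evaluate-correct-iterate (PECE) step for y' = f(y, t)."""
    f_n = f(y_n, t_n)
    t_next = t_n + h
    y_new = y_n + h * f_n                       # step 1: predictor (forward Euler)
    for _ in range(max_iter):
        f_new = f(y_new, t_next)                # step 2: evaluate derivative
        y_corr = y_n + 0.5 * h * (f_n + f_new)  # step 3: corrector (trapezoidal rule)
        if abs(y_corr - y_new) <= tol:          # step 4: convergence test
            return y_corr
        y_new = y_corr
    return y_new

# Example: y' = -y, y(0) = 1, one step of size h = 0.1.
y1 = pc_step(lambda y, t: -y, 0.0, 1.0, 0.1)
```

The corrector iteration converges to the fixed point of the implicit trapezoidal formula, here (1 - h/2)/(1 + h/2).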

The ADAMS integrator is a predictor-corrector multistep method, contained in SIMULINK. A number of different Adams methods can be found in the literature, but the most important ones are Adams-Bashforth and Adams-Moulton, which will be described here.

The Adams-Bashforth method.

The Adams-Bashforth integrator is an explicit multistep method. It can be derived in a number of different ways, see ref.[15]. The derivation is too complicated for a brief treatment, so only the resulting expressions will be presented here. The k-step Adams-Bashforth formula can be written as:

y_n+1 = y_n + h Σ_{i=1}^{k} β_ki f_n+1-i

where:

β_ki = (-1)^(i-1) Σ_{j=i-1}^{k-1} (j choose i-1) γ_j        (D-26)

and:

γ_j = (-1)^j ∫_0^1 (-s choose j) ds        (D-27)

Although equations (D-26) and (D-27) can be used to determine the coefficients β_kj and γ_j, they do not give the most convenient form for this. Frequently the method of generating functions is used [15]. This method yields the following recursive expression:

γ_m + γ_m-1/2 + γ_m-2/3 + ... + γ_0/(m+1) = 1

Table D-2 gives the resulting set of coefficients γ_j and β_kj.

Table D-2. Coefficients for Adams-Bashforth methods.

The Adams-Moulton method.

The Adams-Moulton integrator is an implicit multistep method. Its derivation can be found in ref.[15]. The resulting method can be written as:

y_n+1 = y_n + h Σ_{i=0}^{k} β*_ki f_n+1-i

where:

β*_ki = (-1)^i Σ_{j=i}^{k} (j choose i) γ*_j

and:

γ*_j = (-1)^j ∫_0^1 (1-s choose j) ds

The coefficients can again be found by use of generating functions. Table D-3 lists some values of γ*_j and β*_ki.

There are two important differences between the Adams-Bashforth and Adams-Moulton methods. The first is that the coefficients of the latter are smaller, which results in smaller round-off and smaller discretization errors. The second difference is that for the same order the Adams-Moulton method uses information from fewer points. The k-step Moulton method is thus of order k + 1, but the k-step Bashforth method is of order k only [15].


Table D-3. Coefficients for Adams-Moulton method.

Predictor-corrector Adams method.

Often, the Adams-Bashforth method is used as predictor and the Adams-Moulton method as corrector. An example of a good predictor-corrector method is the fourth order Adams method, which uses the fourth order Bashforth and Moulton formulas:

predictor:

y_n+1 = y_n + (h/24) (55 f_n - 59 f_n-1 + 37 f_n-2 - 9 f_n-3)

corrector:

y_n+1 = y_n + (h/24) (9 f_n+1 + 19 f_n - 5 f_n-1 + f_n-2)

The SIMULINK integrator ADAMS most likely uses some combination of Bashforth and Moulton methods. This integrator is particularly suited for systems which are smooth and nonlinear, but do not have widely varying time constants [4]. The Adams routines do not give enough accuracy for digital control purposes (systems with mixtures of continuous and discrete dynamics). This is especially true when advanced adaptive and parameter estimation techniques are used [29].
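The fourth order Adams pair above can be sketched as a minimal PECE implementation. Purely for illustration, the four starting values are taken from the exact solution here; in practice a single-step method such as Runge-Kutta supplies them:

```python
import math

def abm4(f, t0, y_start, h, n_steps):
    """Fourth-order Adams-Bashforth predictor / Adams-Moulton corrector,
    applied with one corrector pass (PECE). y_start holds the four
    starting values y_0 .. y_3."""
    ys = list(y_start)
    fs = [f(ys[i], t0 + i * h) for i in range(4)]
    t = t0 + 3 * h                           # time of the newest known value
    for _ in range(n_steps):
        # predictor (Adams-Bashforth, order 4):
        y_p = ys[-1] + h / 24.0 * (55*fs[-1] - 59*fs[-2] + 37*fs[-3] - 9*fs[-4])
        f_p = f(y_p, t + h)
        # corrector (Adams-Moulton, order 4):
        y_c = ys[-1] + h / 24.0 * (9*f_p + 19*fs[-1] - 5*fs[-2] + fs[-3])
        ys.append(y_c)
        fs.append(f(y_c, t + h))
        t += h
    return ys

# Test problem y' = -y, y(0) = 1; starting values from the exact solution e^-t:
h = 0.1
ys = abm4(lambda y, t: -y, 0.0, [math.exp(-i * h) for i in range(4)], h, 16)
```

For this smooth test problem the fourth order pair reproduces e^-t to roughly the expected O(h^4) accuracy.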

ADAMS can take an undetermined number of steps between two output points. It is therefore not possible to convert this method to a fixed step size method by setting the maximum step size equal to the minimum step size in SIMULINK. The accuracy of the method is not affected by maximum and minimum step sizes; for ADAMS these user-defined parameters are used only to control the output points for plotting the results.

d. Extrapolation methods.

The predictor formulas, described earlier, can be regarded as methods where y_n+1 is extrapolated from known previous values of y_i and f_i. Other kinds of extrapolation have been implemented in various extrapolation methods. These methods will not be discussed here, because they are not used in SIMULINK. See ref.[15] for more information.

D.2.3 Stiff differential equations.

Some problems are not handled very efficiently by the methods mentioned so far. A special class of differential equations, which occurs in many physically important situations, is called 'stiff'. Stiffness can roughly be defined as the presence of one or more fast decay processes in time, with a time constant that is short compared to the time span of interest (refs.[14] and [15]). The time constant is defined as the time in which a solution to a differential equation decays by a factor 1/e.

In a physical system, different elements often have different time constants, which means that some solutions to differential equations decay much faster than others. In such cases, the signals with fast dynamics will determine the stability of the integration method, even though these components may have decayed to insignificant levels. After a short time, the fast components are negligibly small compared to the slow components. From this point on, conventional numerical methods must take very small time steps because of the fast components, although only the slow components contain any significant information at that time. This is the problem of stiff equations.

Example D-1 (reproduced from refs.[14] and [15]).

Consider the system:

The eigenvalues of the matrix of coefficients:

are -1 and -1000. If u(0) = 1 and v(0) = 1, the solution is:

After a very short time, the solution can be approximated by:

When using Euler's method to solve this system, the discretized solution becomes:

For h = 0.01, the numerical solution behaves as disastrously as shown in figure D-4, which shows the family of solutions of a stiff problem. As in figure D-1, the numerical solution moves from one curve to another. Although the system is stable, the numerical solution diverges rapidly. The only way to solve this problem is reducing the step size, but eventually, round-off and discretization errors will accumulate enough to result in another instability. The transient part of the solution, which decays very fast, prevents an increase in step size, although the solution is very smooth after only a few seconds!
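The instability can be reproduced with forward Euler. Since the example's own equations are not reproduced here, the sketch below assumes a standard textbook stiff system with the same eigenvalues, -1 and -1000 (an illustrative matrix, not necessarily the one used in the thesis):

```python
import numpy as np

# Illustrative stiff system (assumed): eigenvalues of A are -1 and -1000.
A = np.array([[998.0, 1998.0],
              [-999.0, -1999.0]])

def euler(A, x0, h, n):
    """Forward Euler for x' = A x."""
    x = np.array(x0, dtype=float)
    for _ in range(n):
        x = x + h * (A @ x)
    return x

# h = 0.01 puts the fast mode outside the stability region |1 + h*lambda| <= 1
# (1 + 0.01*(-1000) = -9), so the numerical solution explodes:
diverged = euler(A, [1.0, 1.0], h=0.01, n=50)

# h = 0.001 keeps the fast mode stable and tracks the smooth slow solution:
ok = euler(A, [1.0, 1.0], h=0.001, n=1000)
```

For this matrix with x(0) = (1, 1), the first component of the exact solution is 4e^-t - 3e^-1000t, so after a fraction of a second the step size is limited purely by a fast mode that has already died out.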


SIMULINK contains two integrators which are particularly suited for solving stiff ODEs: GEAR and LINSIM [4].

The integrator LINSIM.

LINSIM works well for linear or 'almost' linear systems with a very wide range of time constants. The method divides the system into linear and nonlinear subsystems and simulates only the nonlinear system dynamics. The linear part of the system is then solved separately, in a much faster way. The method uses variable step sizes, but setting the maximum step size equal to the minimum step size yields a fixed step method.

The integrator GEAR.

GEAR works well with systems which are smooth, nonlinear, and stiff. The method is less efficient than others for non-stiff systems. It may not work well when there are singularities in the system or if the system is perturbed by rapidly changing inputs. GEAR uses a predictor-corrector method which takes a variable number of steps between two output points. It cannot be converted to a fixed step method by setting the maximum step size equal to the minimum step size. In ref.[15], a multistep method that is suited for solving stiff systems is introduced. A detailed discussion of concepts such as stiff stability is also included in ref.[15]. The stiffly stable multistep method can be written as:

y_n+1 = Σ_{i=1}^{k} α_i y_n+1-i + h β_0 f_n+1        (D-34)

The coefficients α_i and β_0 are listed in table D-4. The method is closely related to the Adams method. Many different implementations of this stiffly stable multistep method exist nowadays. They differ in the ways in which the implicit expression is solved. It is not known how SIMULINK handles this problem, because of the inaccessibility of the source code. Nevertheless, all implementations are based upon equation (D-34).

Table D-4. Coefficients for stiffly stable method (GEAR).
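The simplest member of the stiffly stable family is backward Euler (k = 1, α_1 = 1, β_0 = 1). The sketch below applies it to an illustrative stiff linear system (the matrix is assumed, not from the thesis); for a linear system the implicit equation can be solved directly rather than iteratively:

```python
import numpy as np

# Illustrative stiff matrix (assumed): eigenvalues -1 and -1000.
A = np.array([[998.0, 1998.0],
              [-999.0, -1999.0]])

def backward_euler(A, x0, h, n):
    """Backward Euler (k = 1 stiffly stable method):
    x[n+1] = x[n] + h*A*x[n+1]  =>  (I - h*A) x[n+1] = x[n]."""
    x = np.array(x0, dtype=float)
    M = np.eye(A.shape[0]) - h * A   # constant for a linear time-invariant system
    for _ in range(n):
        x = np.linalg.solve(M, x)    # solve the implicit equation directly
    return x

# A step size 100x beyond the forward-Euler stability limit remains stable:
x1 = backward_euler(A, [1.0, 1.0], h=0.1, n=10)   # x at t = 1
```

For nonlinear systems the implicit equation must instead be solved iteratively (e.g. by a Newton method) at every step, which is where the various implementations of equation (D-34) differ.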

In SIMULINK, it is possible to use a combination of ADAMS and GEAR. SIMULINK will automatically determine if the system is stiff, in which case it will use GEAR, or not, in which case it will use ADAMS.

D.2.4 Differential algebraic equations.

The most generalized system equations of an aircraft can be written as:

(see ref.[11]). These equations can be formulated as implicit differential equations:

The presence of algebraic equations in variables whose derivatives are not defined by an explicit differential equation also makes these equations implicit. Such equations are called 'Differential Algebraic Equations' (DAEs). It is not always possible to transform these DAEs into ODEs. The integration method of Gear could, in theory, be used for the solution of DAEs [8], but unfortunately, this is not (yet?) possible in SIMULINK.

D.3 Simulation of digital controllers.

The time responses of the linear time-invariant continuous system:

can easily be obtained by discretizing and then using the resulting recursion:

This method is, for instance, used by the MATLAB command LSIM from the CONTROL SYSTEMS TOOLBOX. This approach is fast, but it generates information on the sample times only. For the general nonlinear time-varying system, numerical integration routines as described in section D.2 must be used. If a digital controller is used to control continuous-time systems, the scheme shown in figure D-5 should be used. This figure assumes a Zero Order Hold; thus, the control input u(t) is updated at each t = kT_s, and then held constant until t = (k+1)T_s. The continuous dynamics are contained in the block f(ẋ, x, t). They are integrated using a Runge-Kutta integrator (the Adams methods are not accurate enough for simulating discrete systems).
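The discretize-then-recurse idea can be sketched for a scalar system, where the zero-order-hold discretization is available in closed form (an illustrative example, not the LSIM implementation):

```python
import math

def simulate_zoh(a, b, Ts, x0, u_seq):
    """Simulate x' = a*x + b*u with u held constant over each sample period Ts,
    using the exact ZOH discretization Ad = e^(a*Ts), Bd = (Ad - 1)/a * b."""
    Ad = math.exp(a * Ts)
    Bd = (Ad - 1.0) / a * b
    xs = [x0]
    for u in u_seq:                  # recursion x[k+1] = Ad*x[k] + Bd*u[k]
        xs.append(Ad * xs[-1] + Bd * u)
    return xs

# Step response of x' = -x + u from x(0) = 0; exact solution x(t) = 1 - e^-t.
xs = simulate_zoh(a=-1.0, b=1.0, Ts=0.1, x0=0.0, u_seq=[1.0] * 10)
```

Because the discretization is exact under the ZOH assumption, xs[k] matches the continuous solution at t = k*Ts, but, as noted above, nothing is produced between the sample times.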

Figure D-5. Digital control simulation scheme (see ref.[29]).

Note that two time intervals are involved: the sampling period T_s and the Runge-Kutta integration period h_n ≤ T_s. This technique provides x(t) even at points between sample instants (in fact, it provides x(t) at multiples of h_n), which is essential in verifying acceptable intersample behaviour of the closed-loop system prior to implementing the digital controller on the actual plant. Even though the closed-loop behaviour is acceptable at the sample points, with improper digital control system design there can be serious problems between the samples, because a badly designed controller can destroy observability, so that poor intersample behaviour is not apparent at the sample points [29].

D.4 Obtaining state models from transfer functions.

Simulation programs like SIMULINK use numerical integration routines to obtain state trajectories, so it is necessary that the systems are written as a set of linear or nonlinear state equations. For this reason, transfer functions need to be transformed into state models. Since this transformation is normally performed automatically by the simulation program, the user doesn't need to worry about it. However, it is still useful to know the principles behind this transformation, and at the end of this section it will be shown that it is sometimes convenient to do this transformation by hand.

A number of different transformations from transfer function to state space models can be found in the literature (see for instance refs.[8], [9], or [29]). Only one method will be shown here. Consider the following transfer function:

To transform this equation into a state model, the equation will first be rewritten and an auxiliary variable V(s) will be introduced:

This yields the following equations:

and:

Equations (D-41) and (D-42) can be used to obtain a state model. Figure D-6 shows the block-diagram equivalent of the transfer function (D-39). The diagram is built up from first order integrators (1/s blocks) and gain blocks only. If the state vector for this transfer function block is defined as:

the following linear state equation is found:

with:

A =

Figure D-6. Block-diagram equivalent of transfer function (D-39).

As said before, the simulation package will usually do this transformation automatically. However, if the coefficients a_0, ..., a_n-1 or b_0, ..., b_n are varying during the simulation (see for instance the Dryden turbulence filters (B-20) to (B-22), where the coefficients of the filters are dependent on L_g = L_g(H), σ_g = σ_g(H), and V), it may be necessary to perform the transformation by hand, using gain scheduling within the block-diagram of figure D-6 1).
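A transformation of this kind can be sketched in code. Since the exact form of (D-39) is not reproduced here, a strictly proper H(s) = (b_n-1 s^n-1 + ... + b_0) / (s^n + a_n-1 s^n-1 + ... + a_0) realized in controllable canonical form is assumed (one of several possible realizations):

```python
import numpy as np

def tf_to_ss(num, den):
    """Controllable canonical realization of a strictly proper transfer
    function. num = [b_0, ..., b_m], den = [a_0, ..., a_n-1] (the leading
    coefficient of s^n is taken to be 1)."""
    n = len(den)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)       # chain of integrators
    A[-1, :] = -np.asarray(den)      # bottom row: -a_0 ... -a_n-1
    B = np.zeros((n, 1))
    B[-1, 0] = 1.0                   # input enters the last integrator
    C = np.zeros((1, n))
    C[0, :len(num)] = num            # output is a combination of the states
    return A, B, C

# Example: H(s) = 1 / (s^2 + 3s + 2), whose poles are -1 and -2.
A, B, C = tf_to_ss([1.0], [2.0, 3.0])
# The eigenvalues of A are the poles of H(s).
```

The bottom row of A contains the denominator coefficients, so a time-varying filter can be simulated by scheduling exactly those gains, which is the by-hand approach suggested above.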

D.5 Algebraic loops.

One difficulty which can arise during simulation of a continuous time system on a digital computer is the occurrence of Algebraic Loops. Consider a system in block-diagram representation. Algebraic, or implicit, loops occur when two or more blocks with direct feedthrough of their inputs form a feedback loop. It shall be shown that these loops have to be solved with an iterative algorithm, which considerably slows down the simulation.

The model which has to be simulated is in fact a parallel system: all variables change simultaneously. But the calculation of the responses of a parallel system on a digital computer (with one microprocessor) can only be done sequentially. The ordering of the different blocks then becomes important. This will be illustrated with an example from ref.[8].

Example D-2.

Consider the system in figure D-7, consisting of three gains: A = 1, B = 2, and C = 3. Two calculation sequences will be evaluated: A-B-C and C-B-A. The signals u, u_A, u_B and u_C are calculated as follows:

Figure D-7. Model, consisting of three gains.
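The need for iteration can be sketched with a small algebraic loop. The loop below is illustrative (it is not the system of figure D-7): a direct-feedthrough gain K in a unit feedback loop, y = K(r - y), solved by successive substitution, which is the kind of per-step iteration that slows a simulation down:

```python
def solve_loop(K, r, tol=1e-12, max_iter=1000):
    """Solve the algebraic loop y = K*(r - y) by fixed-point iteration,
    as a sequential block-update scheme would have to."""
    y = 0.0
    for _ in range(max_iter):
        y_new = K * (r - y)          # re-evaluate the loop equation
        if abs(y_new - y) <= tol:    # loop has converged
            return y_new
        y = y_new
    raise RuntimeError("algebraic loop iteration did not converge")

# Closed-form answer is y = K*r / (1 + K); the iteration converges for |K| < 1.
y = solve_loop(K=0.5, r=1.0)
```

For this loop each error is multiplied by -K per pass, so the iteration converges geometrically for |K| < 1 and diverges otherwise; practical solvers therefore use more robust root-finding than plain substitution.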

1) Gain scheduling can easily be implemented in SIMULINK, using a (nonlinear) function block, representing the varying gain, and a product block. See appendix F.

Соседние файлы в предмете [НЕСОРТИРОВАННОЕ]