This chapter describes functions for solving ordinary differential equation (ODE) initial value problems. The library provides a variety of low-level methods, such as Runge-Kutta and Bulirsch-Stoer routines, and higher-level components for adaptive step-size control. The components can be combined by the user to achieve the desired solution, with full access to any intermediate steps.
These functions are declared in the header file ‘gsl_odeiv.h’.
25.1 Defining the ODE System
25.2 Stepping Functions
25.3 Adaptive Step-size Control
25.4 Evolution
25.5 Examples
25.6 References and Further Reading
The routines solve the general n-dimensional first-order system,

dy_i(t)/dt = f_i (t, y_1(t), ..., y_n(t))

for i = 1, ..., n. The stepping functions rely on the vector of derivatives f_i and the Jacobian matrix, J_{ij} = df_i(t, y(t)) / dy_j.
A system of equations is defined using the gsl_odeiv_system
datatype.
This data type defines a general ODE system with arbitrary parameters.
int (* function) (double t, const double y[], double dydt[], void * params)

This function should store the vector elements f_i(t, y, params) in the array dydt, for arguments (t, y) and parameters params. The function should return GSL_SUCCESS if the calculation was completed successfully. Any other return value indicates an error.
int (* jacobian) (double t, const double y[], double * dfdy, double dfdt[], void * params)

This function should store the vector of derivative elements df_i(t, y, params)/dt in the array dfdt and the Jacobian matrix J_{ij} in the array dfdy, regarded as a row-ordered matrix J(i,j) = dfdy[i * dimension + j] where dimension is the dimension of the system. The function should return GSL_SUCCESS if the calculation was completed successfully. Any other return value indicates an error.
Some of the simpler solver algorithms do not make use of the Jacobian
matrix, so it is not always strictly necessary to provide it (the
jacobian element of the struct can be replaced by a null pointer
for those algorithms). However, it is useful to provide the Jacobian to allow
the solver algorithms to be interchanged—the best algorithms make use
of the Jacobian.
size_t dimension

This is the dimension of the system of equations.
void * params

This is a pointer to the arbitrary parameters of the system.
The lowest level components are the stepping functions which advance a solution from time t to t+h for a fixed step-size h and estimate the resulting local error.
Function: gsl_odeiv_step * gsl_odeiv_step_alloc (const gsl_odeiv_step_type * T, size_t dim)

This function returns a pointer to a newly allocated instance of a stepping function of type T for a system of dim dimensions.
Function: int gsl_odeiv_step_reset (gsl_odeiv_step * s)

This function resets the stepping function s. It should be used whenever the next use of s will not be a continuation of a previous step.
Function: void gsl_odeiv_step_free (gsl_odeiv_step * s)

This function frees all the memory associated with the stepping function s.
Function: const char * gsl_odeiv_step_name (const gsl_odeiv_step * s)

This function returns a pointer to the name of the stepping function. For example,
printf ("step method is '%s'\n",
gsl_odeiv_step_name (s));
would print something like step method is 'rk4'.
Function: unsigned int gsl_odeiv_step_order (const gsl_odeiv_step * s)

This function returns the order of the stepping function on the previous step. This order can vary if the stepping function itself is adaptive.
Function: int gsl_odeiv_step_apply (gsl_odeiv_step * s, double t, double h, double y[], double yerr[], const double dydt_in[], double dydt_out[], const gsl_odeiv_system * dydt)

This function applies the stepping function s to the system of equations defined by dydt, using the step size h to advance the system from time t and state y to time t+h. The new state of the system is stored in y on output, with an estimate of the absolute error in each component stored in yerr. If the argument dydt_in is not null it should point to an array containing the derivatives for the system at time t on input. This is optional as the derivatives will be computed internally if they are not provided, but allows the reuse of existing derivative information. On output the new derivatives of the system at time t+h will be stored in dydt_out if it is not null.
If the user-supplied functions defined in the system dydt return a
status other than GSL_SUCCESS the step will be aborted. In this
case, the elements of y will be restored to their pre-step values
and the error code from the user-supplied function will be returned.
The step-size h will be set to the step-size which caused the error.
If the function is called again with a smaller step-size, it should be possible to get closer to any singularity.
To distinguish between error codes from the user-supplied functions and
those from gsl_odeiv_step_apply itself, any user-defined return
values should be distinct from the standard GSL error codes.
The following algorithms are available,
Step Type: gsl_odeiv_step_rk2
Embedded Runge-Kutta (2, 3) method.

Step Type: gsl_odeiv_step_rk4
4th order (classical) Runge-Kutta.

Step Type: gsl_odeiv_step_rkf45
Embedded Runge-Kutta-Fehlberg (4, 5) method. This method is a good general-purpose integrator.

Step Type: gsl_odeiv_step_rkck
Embedded Runge-Kutta Cash-Karp (4, 5) method.

Step Type: gsl_odeiv_step_rk8pd
Embedded Runge-Kutta Prince-Dormand (8, 9) method.

Step Type: gsl_odeiv_step_rk2imp
Implicit 2nd order Runge-Kutta at Gaussian points.

Step Type: gsl_odeiv_step_rk4imp
Implicit 4th order Runge-Kutta at Gaussian points.

Step Type: gsl_odeiv_step_bsimp
Implicit Bulirsch-Stoer method of Bader and Deuflhard. This algorithm requires the Jacobian.

Step Type: gsl_odeiv_step_gear1
M=1 implicit Gear method.

Step Type: gsl_odeiv_step_gear2
M=2 implicit Gear method.
The control function examines the proposed change to the solution produced by a stepping function and attempts to determine the optimal step-size for a user-specified level of error.
Function: gsl_odeiv_control * gsl_odeiv_control_standard_new (double eps_abs, double eps_rel, double a_y, double a_dydt)

The standard control object is a four parameter heuristic based on absolute and relative errors eps_abs and eps_rel, and scaling factors a_y and a_dydt for the system state y(t) and derivatives y'(t) respectively.
The step-size adjustment procedure for this method begins by computing the desired error level D_i for each component,

D_i = eps_abs + eps_rel * (a_y |y_i| + a_dydt h |y'_i|)

and comparing it with the observed error E_i = |yerr_i|. If the observed error E exceeds the desired error level D by more than 10% for any component then the method reduces the step-size by an appropriate factor,

h_new = h_old * S * (E/D)^(-1/q)

where q is the consistency order of the method (e.g. q=4 for a 4(5) embedded RK), and S is a safety factor of 0.9. The ratio E/D is taken to be the maximum of the ratios E_i/D_i.

If the observed error E is less than 50% of the desired error level D for the maximum ratio E_i/D_i then the algorithm takes the opportunity to increase the step-size to bring the error in line with the desired level,

h_new = h_old * S * (E/D)^(-1/(q+1))

This encompasses all the standard error scaling methods. To avoid uncontrolled changes in the step-size, the overall scaling factor is limited to the range 1/5 to 5.
Function: gsl_odeiv_control * gsl_odeiv_control_y_new (double eps_abs, double eps_rel)

This function creates a new control object which will keep the local error on each step within an absolute error of eps_abs and relative error of eps_rel with respect to the solution y_i(t). This is equivalent to the standard control object with a_y=1 and a_dydt=0.
Function: gsl_odeiv_control * gsl_odeiv_control_yp_new (double eps_abs, double eps_rel)

This function creates a new control object which will keep the local error on each step within an absolute error of eps_abs and relative error of eps_rel with respect to the derivatives of the solution y'_i(t). This is equivalent to the standard control object with a_y=0 and a_dydt=1.
Function: gsl_odeiv_control * gsl_odeiv_control_scaled_new (double eps_abs, double eps_rel, double a_y, double a_dydt, const double scale_abs[], size_t dim)

This function creates a new control object which uses the same algorithm as gsl_odeiv_control_standard_new but with an absolute error which is scaled for each component by the array scale_abs. The formula for D_i for this control object is,

D_i = eps_abs * s_i + eps_rel * (a_y |y_i| + a_dydt h |y'_i|)

where s_i is the i-th component of the array scale_abs. The same error control heuristic is used by the Matlab ODE suite.
Function: gsl_odeiv_control * gsl_odeiv_control_alloc (const gsl_odeiv_control_type * T)

This function returns a pointer to a newly allocated instance of a control function of type T. This function is only needed for defining new types of control functions. For most purposes the standard control functions described above should be sufficient.
Function: int gsl_odeiv_control_init (gsl_odeiv_control * c, double eps_abs, double eps_rel, double a_y, double a_dydt)

This function initializes the control function c with the parameters eps_abs (absolute error), eps_rel (relative error), a_y (scaling factor for y) and a_dydt (scaling factor for derivatives).
Function: void gsl_odeiv_control_free (gsl_odeiv_control * c)

This function frees all the memory associated with the control function c.
Function: int gsl_odeiv_control_hadjust (gsl_odeiv_control * c, gsl_odeiv_step * step, const double y[], const double yerr[], const double dydt[], double * h)

This function adjusts the step-size h using the control function
c, and the current values of y, yerr and dydt.
The stepping function step is also needed to determine the order
of the method. If the error in the y-values yerr is found to be
too large then the step-size h is reduced and the function returns
GSL_ODEIV_HADJ_DEC. If the error is sufficiently small then
h may be increased and GSL_ODEIV_HADJ_INC is returned. The
function returns GSL_ODEIV_HADJ_NIL if the step-size is
unchanged. The goal of the function is to estimate the largest
step-size which satisfies the user-specified accuracy requirements for
the current point.
Function: const char * gsl_odeiv_control_name (const gsl_odeiv_control * c)

This function returns a pointer to the name of the control function. For example,
printf ("control method is '%s'\n",
gsl_odeiv_control_name (c));
would print something like control method is 'standard'.
The highest level of the system is the evolution function which combines
the results of a stepping function and control function to reliably
advance the solution forward over an interval (t_0, t_1). If the
control function signals that the step-size should be decreased the
evolution function backs out of the current step and tries the proposed
smaller step-size. This process is continued until an acceptable
step-size is found.
Function: gsl_odeiv_evolve * gsl_odeiv_evolve_alloc (size_t dim)

This function returns a pointer to a newly allocated instance of an evolution function for a system of dim dimensions.
Function: int gsl_odeiv_evolve_apply (gsl_odeiv_evolve * e, gsl_odeiv_control * c, gsl_odeiv_step * step, const gsl_odeiv_system * dydt, double * t, double t1, double * h, double y[])

This function advances the system (e, dydt) from time
t and position y using the stepping function step.
The new time and position are stored in t and y on output.
The initial step-size is taken as h, but this will be modified
using the control function c to achieve the appropriate error
bound if necessary. The routine may make several calls to step in
order to determine the optimum step-size. An estimate of
the local error for the step can be obtained from the components of the array e->yerr[].
If the step-size has been changed the value of h will be modified on output.
The maximum time t1 is guaranteed not to be exceeded by the time-step. On the
final time-step the value of t will be set to t1 exactly.
If the user-supplied functions defined in the system dydt return a
status other than GSL_SUCCESS the step will be aborted. In this
case, t and y will be restored to their pre-step values
and the error code from the user-supplied function will be returned. To
distinguish between error codes from the user-supplied functions and
those from gsl_odeiv_evolve_apply itself, any user-defined return
values should be distinct from the standard GSL error codes.
Function: int gsl_odeiv_evolve_reset (gsl_odeiv_evolve * e)

This function resets the evolution function e. It should be used whenever the next use of e will not be a continuation of a previous step.
Function: void gsl_odeiv_evolve_free (gsl_odeiv_evolve * e)

This function frees all the memory associated with the evolution function e.
The following program solves the second-order nonlinear Van der Pol oscillator equation,

x''(t) + mu x'(t) (x(t)^2 - 1) + x(t) = 0

This can be converted into a first order system suitable for use with the routines described in this chapter by introducing a separate variable for the velocity, y = x'(t),

x' = y
y' = -x + mu y (1 - x^2)
The program begins by defining functions for these derivatives and
their Jacobian,
For functions with multiple parameters, the appropriate information can be passed in through the params argument using a pointer to a struct.
The main loop of the program evolves the solution from (x, y) = (1, 0) at t = 0 to t = 100. The step-size h is automatically adjusted by the controller to maintain an absolute accuracy of 10^{-6} in the function values y.
To obtain the values at regular intervals, rather than the variable
spacings chosen by the control function, the main loop can be modified
to advance the solution from one point to the next. For example, the
following main loop prints the solution at the fixed points t = 1, 2, ..., 100,
for (i = 1; i <= 100; i++)
  {
    double ti = i * t1 / 100.0;

    while (t < ti)
      {
        int status = gsl_odeiv_evolve_apply (e, c, s,
                                             &sys,
                                             &t, ti, &h,
                                             y);
        if (status != GSL_SUCCESS)
          break;
      }

    printf ("%.5e %.5e %.5e\n", t, y[0], y[1]);
  }
It is also possible to work with a non-adaptive integrator, using only
the stepping function itself. The following program uses the rk4
fourth-order Runge-Kutta stepping function with a fixed stepsize of
0.01,
The derivatives must be initialized for the starting point
before the first step is taken. Subsequent steps use the output
derivatives dydt_out as inputs to the next step by copying their
values into dydt_in.
Many of the basic Runge-Kutta formulas can be found in the Handbook of Mathematical Functions,

Abramowitz & Stegun (eds.), Handbook of Mathematical Functions, Section 25.5.
The implicit Bulirsch-Stoer algorithm bsimp is described in the following paper,

G. Bader and P. Deuflhard, "A Semi-Implicit Mid-Point Rule for Stiff Systems of Ordinary Differential Equations", Numer. Math. 41, 373-398, 1983.
This document was generated by Autobuild on September 26, 2007 using texi2html 1.78.