This chapter describes functions for solving linear systems. The
library provides linear algebra operations which operate directly on
the gsl_vector and gsl_matrix objects. These routines
use the standard algorithms from Golub & Van Loan's Matrix
Computations.
For very large systems, the routines found in LAPACK should be considered; these support specialized data representations and other optimizations.
The functions described in this chapter are declared in the header file ‘gsl_linalg.h’.
A general square matrix A has an LU decomposition into upper and lower triangular matrices,

P A = L U

where P is a permutation matrix, L is a unit lower triangular matrix and U is an upper triangular matrix. For square matrices this decomposition can be used to convert the linear system A x = b into a pair of triangular systems (L y = P b, U x = y), which can be solved by forward and back-substitution. Note that the LU decomposition is valid for singular matrices.
These functions factorize the square matrix A into the LU decomposition P A = L U. On output the diagonal and upper triangular part of the input matrix A contain the matrix U. The lower triangular part of the input matrix (excluding the diagonal) contains L. The diagonal elements of L are unity, and are not stored.

The permutation matrix P is encoded in the permutation p. The j-th column of the matrix P is given by the k-th column of the identity matrix, where k = p_j is the j-th element of the permutation vector. The sign of the permutation is given by signum. It has the value (-1)^n, where n is the number of interchanges in the permutation.
The algorithm used in the decomposition is Gaussian Elimination with partial pivoting (Golub & Van Loan, Matrix Computations, Algorithm 3.4.1).
These functions solve the square system A x = b using the LU decomposition of A into (LU, p) given by gsl_linalg_LU_decomp or gsl_linalg_complex_LU_decomp.
These functions solve the square system A x = b in-place using the LU decomposition of A into (LU, p). On input x should contain the right-hand side b, which is replaced by the solution on output.
These functions apply an iterative improvement to x, the solution of A x = b, using the LU decomposition of A into (LU, p). The initial residual r = A x - b is also computed and stored in residual.
These functions compute the inverse of a matrix A from its LU decomposition (LU, p), storing the result in the matrix inverse. The inverse is computed by solving the system A x = b for each column of the identity matrix. It is preferable to avoid direct use of the inverse whenever possible, as the linear solver functions can obtain the same result more efficiently and reliably (consult any introductory textbook on numerical linear algebra for details).
These functions compute the determinant of a matrix A from its LU decomposition, LU. The determinant is computed as the product of the diagonal elements of U and the sign of the row permutation signum.
These functions compute the logarithm of the absolute value of the determinant of a matrix A, ln|det(A)|, from its LU decomposition, LU. This function may be useful if the direct computation of the determinant would overflow or underflow.
These functions compute the sign or phase factor of the determinant of a matrix A, det(A)/|det(A)|, from its LU decomposition, LU.
A general rectangular M-by-N matrix A has a QR decomposition into the product of an orthogonal M-by-M square matrix Q (where Q^T Q = I) and an M-by-N right-triangular matrix R,

A = Q R

This decomposition can be used to convert the linear system A x = b into the triangular system R x = Q^T b, which can be solved by back-substitution. Another use of the QR decomposition is to compute an orthonormal basis for a set of vectors. The first N columns of Q form an orthonormal basis for the range of A, ran(A), when A has full column rank.
This function factorizes the M-by-N matrix A into the QR decomposition A = Q R. On output the diagonal and upper triangular part of the input matrix contain the matrix R. The vector tau and the columns of the lower triangular part of the matrix A contain the Householder coefficients and Householder vectors which encode the orthogonal matrix Q. The vector tau must be of length k = min(M, N). The matrix Q is related to these components by,

Q = Q_k ... Q_2 Q_1

where Q_i = I - tau_i v_i v_i^T and v_i is the Householder vector v_i = (0, ..., 1, A(i+1,i), A(i+2,i), ..., A(m,i)). This is the same storage scheme as used by LAPACK.
The algorithm used to perform the decomposition is Householder QR (Golub & Van Loan, Matrix Computations, Algorithm 5.2.1).
This function solves the square system A x = b using the QR decomposition of A into (QR, tau) given by gsl_linalg_QR_decomp. The least-squares solution for rectangular systems can be found using gsl_linalg_QR_lssolve.
This function solves the square system A x = b in-place using the QR decomposition of A into (QR, tau) given by gsl_linalg_QR_decomp. On input x should contain the right-hand side b, which is replaced by the solution on output.
This function finds the least squares solution to the overdetermined system A x = b where the matrix A has more rows than columns. The least squares solution minimizes the Euclidean norm of the residual, ||A x - b||. The routine uses the QR decomposition of A into (QR, tau) given by gsl_linalg_QR_decomp. The solution is returned in x. The residual is computed as a by-product and stored in residual.
This function applies the matrix Q^T encoded in the decomposition (QR, tau) to the vector v, storing the result Q^T v in v. The matrix multiplication is carried out directly using the encoding of the Householder vectors without needing to form the full matrix Q^T.
This function applies the matrix Q encoded in the decomposition (QR, tau) to the vector v, storing the result Q v in v. The matrix multiplication is carried out directly using the encoding of the Householder vectors without needing to form the full matrix Q.
This function solves the triangular system R x = b for x. It may be useful if the product b' = Q^T b has already been computed using gsl_linalg_QR_QTvec.
This function solves the triangular system R x = b for x in-place. On input x should contain the right-hand side b and is replaced by the solution on output. This function may be useful if the product b' = Q^T b has already been computed using gsl_linalg_QR_QTvec.
This function unpacks the encoded QR decomposition (QR, tau) into the matrices Q and R, where Q is M-by-M and R is M-by-N.
This function solves the system R x = Q^T b for x. It can be used when the QR decomposition of a matrix is available in unpacked form as (Q, R).
This function performs a rank-1 update w v^T of the QR decomposition (Q, R). The update is given by Q' R' = Q (R + w v^T) where the output matrices Q' and R' are also orthogonal and right triangular. Note that w is destroyed by the update.
This function solves the triangular system R x = b for the N-by-N matrix R.
This function solves the triangular system R x = b in-place. On input x should contain the right-hand side b, which is replaced by the solution on output.
The QR decomposition can be extended to the rank deficient case by introducing a column permutation P,

A P = Q R

The first r columns of Q form an orthonormal basis for the range of A for a matrix with column rank r. This decomposition can also be used to convert the linear system A x = b into the triangular system R y = Q^T b, x = P y, which can be solved by back-substitution and permutation. We denote the QR decomposition with column pivoting by QRP^T since A = Q R P^T.
This function factorizes the M-by-N matrix A into the QRP^T decomposition A = Q R P^T. On output the diagonal and upper triangular part of the input matrix contain the matrix R. The permutation matrix P is stored in the permutation p. The sign of the permutation is given by signum. It has the value (-1)^n, where n is the number of interchanges in the permutation. The vector tau and the columns of the lower triangular part of the matrix A contain the Householder coefficients and vectors which encode the orthogonal matrix Q. The vector tau must be of length k = min(M, N). The matrix Q is related to these components by,

Q = Q_k ... Q_2 Q_1

where Q_i = I - tau_i v_i v_i^T and v_i is the Householder vector v_i = (0, ..., 1, A(i+1,i), A(i+2,i), ..., A(m,i)). This is the same storage scheme as used by LAPACK. The vector norm is a workspace of length N used for column pivoting.
The algorithm used to perform the decomposition is Householder QR with column pivoting (Golub & Van Loan, Matrix Computations, Algorithm 5.4.1).
This function factorizes the matrix A into the decomposition A = Q R P^T without modifying A itself, storing the output in the separate matrices q and r.
This function solves the square system A x = b using the QRP^T decomposition of A into (QR, tau, p) given by gsl_linalg_QRPT_decomp.
This function solves the square system A x = b in-place using the QRP^T decomposition of A into (QR, tau, p). On input x should contain the right-hand side b, which is replaced by the solution on output.
This function solves the square system R P^T x = Q^T b for x. It can be used when the QR decomposition of a matrix is available in unpacked form as (Q, R).
This function performs a rank-1 update w v^T of the QRP^T decomposition (Q, R, p). The update is given by Q' R' = Q (R + w v^T P) where the output matrices Q' and R' are also orthogonal and right triangular. Note that w is destroyed by the update. The permutation p is not changed.
This function solves the triangular system R P^T x = b for the N-by-N matrix R contained in QR.
This function solves the triangular system R P^T x = b in-place for the N-by-N matrix R contained in QR. On input x should contain the right-hand side b, which is replaced by the solution on output.
A general rectangular M-by-N matrix A has a singular value decomposition (SVD) into the product of an M-by-N orthogonal matrix U, an N-by-N diagonal matrix of singular values S and the transpose of an N-by-N orthogonal square matrix V,

A = U S V^T

The singular values sigma_i = S_{ii} are all non-negative and are generally chosen to form a non-increasing sequence sigma_1 >= sigma_2 >= ... >= sigma_N >= 0.
The singular value decomposition of a matrix has many practical uses. The condition number of the matrix is given by the ratio of the largest singular value to the smallest singular value. The presence of a zero singular value indicates that the matrix is singular. The number of non-zero singular values indicates the rank of the matrix. In practice singular value decomposition of a rank-deficient matrix will not produce exact zeroes for singular values, due to finite numerical precision. Small singular values should be edited by choosing a suitable tolerance.
For a rank-deficient matrix, the null space of A is given by the columns of V corresponding to the zero singular values. Similarly, the range of A is given by columns of U corresponding to the non-zero singular values.
This function factorizes the M-by-N matrix A into the singular value decomposition A = U S V^T for M >= N. On output the matrix A is replaced by U. The diagonal elements of the singular value matrix S are stored in the vector S. The singular values are non-negative and form a non-increasing sequence from S_1 to S_N. The matrix V contains the elements of V in untransposed form. To form the product U S V^T it is necessary to take the transpose of V. A workspace of length N is required in work.
This routine uses the Golub-Reinsch SVD algorithm.
This function computes the SVD using the modified Golub-Reinsch algorithm, which is faster for M >> N. It requires the vector work of length N and the N-by-N matrix X as additional working space.
This function computes the SVD of the M-by-N matrix A using one-sided Jacobi orthogonalization for M >= N. The Jacobi method can compute singular values to higher relative accuracy than Golub-Reinsch algorithms (see references for details).
This function solves the system A x = b using the singular value decomposition (U, S, V) of A given by gsl_linalg_SV_decomp.
Only non-zero singular values are used in computing the solution. The parts of the solution corresponding to singular values of zero are ignored. Other singular values can be edited out by setting them to zero before calling this function.
In the over-determined case where A has more rows than columns the system is solved in the least squares sense, returning the solution x which minimizes ||A x - b||.
A symmetric, positive definite square matrix A has a Cholesky decomposition into a product of a lower triangular matrix L and its transpose L^T,

A = L L^T

This is sometimes referred to as taking the square-root of a matrix. The Cholesky decomposition can only be carried out when all the eigenvalues of the matrix are positive. This decomposition can be used to convert the linear system A x = b into a pair of triangular systems (L y = b, L^T x = y), which can be solved by forward and back-substitution.
This function factorizes the positive-definite symmetric square matrix A into the Cholesky decomposition A = L L^T. On input only the diagonal and lower-triangular part of the matrix A are needed. On output the diagonal and lower triangular part of the input matrix A contain the matrix L. The upper triangular part of the input matrix contains L^T, the diagonal terms being identical for both L and L^T. If the matrix is not positive-definite then the decomposition will fail, returning the error code GSL_EDOM.
This function solves the system A x = b using the Cholesky decomposition of A into the matrix cholesky given by gsl_linalg_cholesky_decomp.
This function solves the system A x = b in-place using the Cholesky decomposition of A into the matrix cholesky given by gsl_linalg_cholesky_decomp. On input x should contain the right-hand side b, which is replaced by the solution on output.
A symmetric matrix A can be factorized by similarity transformations into the form,

A = Q T Q^T

where Q is an orthogonal matrix and T is a symmetric tridiagonal matrix.
This function factorizes the symmetric square matrix A into the symmetric tridiagonal decomposition Q T Q^T. On output the diagonal and subdiagonal part of the input matrix A contain the tridiagonal matrix T. The remaining lower triangular part of the input matrix contains the Householder vectors which, together with the Householder coefficients tau, encode the orthogonal matrix Q. This storage scheme is the same as used by LAPACK. The upper triangular part of A is not referenced.
This function unpacks the encoded symmetric tridiagonal decomposition
(A, tau) obtained from gsl_linalg_symmtd_decomp into
the orthogonal matrix Q, the vector of diagonal elements diag
and the vector of subdiagonal elements subdiag.
This function unpacks the diagonal and subdiagonal of the encoded
symmetric tridiagonal decomposition (A, tau) obtained from
gsl_linalg_symmtd_decomp into the vectors diag and subdiag.
A hermitian matrix A can be factorized by similarity transformations into the form,

A = U T U^T

where U is a unitary matrix and T is a real symmetric tridiagonal matrix.
This function factorizes the hermitian matrix A into the symmetric tridiagonal decomposition U T U^T. On output the real parts of the diagonal and subdiagonal part of the input matrix A contain the tridiagonal matrix T. The remaining lower triangular part of the input matrix contains the Householder vectors which, together with the Householder coefficients tau, encode the unitary matrix U. This storage scheme is the same as used by LAPACK. The upper triangular part of A and imaginary parts of the diagonal are not referenced.
This function unpacks the encoded tridiagonal decomposition (A,
tau) obtained from gsl_linalg_hermtd_decomp into the
unitary matrix U, the real vector of diagonal elements diag and
the real vector of subdiagonal elements subdiag.
This function unpacks the diagonal and subdiagonal of the encoded tridiagonal decomposition (A, tau) obtained from gsl_linalg_hermtd_decomp into the real vectors diag and subdiag.
A general matrix A can be decomposed by orthogonal similarity transformations into the form

A = U H U^T

where U is orthogonal and H is an upper Hessenberg matrix, meaning that it has zeros below the first subdiagonal. The Hessenberg reduction is the first step in the Schur decomposition for the nonsymmetric eigenvalue problem, but has applications in other areas as well.
This function computes the Hessenberg decomposition of the matrix A by applying the similarity transformation H = U^T A U. On output, H is stored in the upper portion of A. The information required to construct the matrix U is stored in the lower triangular portion of A. U is a product of N - 2 Householder matrices. The Householder vectors are stored in the lower portion of A (below the subdiagonal) and the Householder coefficients are stored in the vector tau. tau must be of length N.
This function constructs the orthogonal matrix U from the information stored in the Hessenberg matrix H along with the vector tau. H and tau are outputs from gsl_linalg_hessenberg.
This function is similar to gsl_linalg_hessenberg_unpack, except it accumulates the matrix U into V, so that V' = V U. The matrix V must be initialized prior to calling this function. Setting V to the identity matrix provides the same result as gsl_linalg_hessenberg_unpack. If H is order N, then V must have N columns but may have any number of rows.
This function sets the lower triangular portion of H, below
the subdiagonal, to zero. It is useful for clearing out the
Householder vectors after calling gsl_linalg_hessenberg.
A general matrix A can be factorized by similarity transformations into the form,

A = U B V^T

where U and V are orthogonal matrices and B is an N-by-N bidiagonal matrix with non-zero entries only on the diagonal and superdiagonal. The size of U is M-by-N and the size of V is N-by-N.
This function factorizes the M-by-N matrix A into bidiagonal form U B V^T. The diagonal and superdiagonal of the matrix B are stored in the diagonal and superdiagonal of A. The orthogonal matrices U and V are stored as compressed Householder vectors in the remaining elements of A. The Householder coefficients are stored in the vectors tau_U and tau_V. The length of tau_U must equal the number of elements in the diagonal of A and the length of tau_V should be one element shorter.
This function unpacks the bidiagonal decomposition of A given by gsl_linalg_bidiag_decomp, (A, tau_U, tau_V) into the separate orthogonal matrices U, V and the diagonal vector diag and superdiagonal superdiag. Note that U is stored as a compact M-by-N orthogonal matrix satisfying U^T U = I for efficiency.
This function unpacks the bidiagonal decomposition of A given by
gsl_linalg_bidiag_decomp, (A, tau_U, tau_V)
into the separate orthogonal matrices U, V and the diagonal
vector diag and superdiagonal superdiag. The matrix U
is stored in-place in A.
This function unpacks the diagonal and superdiagonal of the bidiagonal
decomposition of A given by gsl_linalg_bidiag_decomp, into
the diagonal vector diag and superdiagonal vector superdiag.
A Householder transformation is a rank-1 modification of the identity matrix which can be used to zero out selected elements of a vector. A Householder matrix P takes the form,

P = I - tau v v^T

where v is a vector (called the Householder vector) and tau = 2/(v^T v). The functions described in this section use the rank-1 structure of the Householder matrix to create and apply Householder transformations efficiently.
This function prepares a Householder transformation P = I - tau v v^T which can be used to zero all the elements of the input vector except the first. On output the transformation is stored in the vector v and the scalar tau is returned.
This function applies the Householder matrix P defined by the scalar tau and the vector v to the left-hand side of the matrix A. On output the result P A is stored in A.
This function applies the Householder matrix P defined by the scalar tau and the vector v to the right-hand side of the matrix A. On output the result A P is stored in A.
This function applies the Householder transformation P defined by the scalar tau and the vector v to the vector w. On output the result P w is stored in w.
This function solves the system A x = b directly using Householder transformations. On output the solution is stored in x and b is not modified. The matrix A is destroyed by the Householder transformations.
This function solves the system A x = b in-place using Householder transformations. On input x should contain the right-hand side b, which is replaced by the solution on output. The matrix A is destroyed by the Householder transformations.
This function solves the general N-by-N system A x = b where A is tridiagonal (N >= 2). The super-diagonal and sub-diagonal vectors e and f must be one element shorter than the diagonal vector diag. The form of A for the 4-by-4 case is shown below,

A = ( d_0 e_0  0   0  )
    ( f_0 d_1 e_1  0  )
    (  0  f_1 d_2 e_2 )
    (  0   0  f_2 d_3 )
This function solves the general N-by-N system A x = b where A is symmetric tridiagonal (N >= 2). The off-diagonal vector e must be one element shorter than the diagonal vector diag. The form of A for the 4-by-4 case is shown below,

A = ( d_0 e_0  0   0  )
    ( e_0 d_1 e_1  0  )
    (  0  e_1 d_2 e_2 )
    (  0   0  e_2 d_3 )

The current implementation uses a variant of Cholesky decomposition which can cause division by zero if the matrix is not positive definite.
This function solves the general N-by-N system A x = b where A is cyclic tridiagonal (N >= 3). The cyclic super-diagonal and sub-diagonal vectors e and f must have the same number of elements as the diagonal vector diag. The form of A for the 4-by-4 case is shown below,

A = ( d_0 e_0  0  f_3 )
    ( f_0 d_1 e_1  0  )
    (  0  f_1 d_2 e_2 )
    ( e_3  0  f_2 d_3 )
This function solves the general N-by-N system A x = b where A is symmetric cyclic tridiagonal (N >= 3). The cyclic off-diagonal vector e must have the same number of elements as the diagonal vector diag. The form of A for the 4-by-4 case is shown below,

A = ( d_0 e_0  0  e_3 )
    ( e_0 d_1 e_1  0  )
    (  0  e_1 d_2 e_2 )
    ( e_3  0  e_2 d_3 )
The process of balancing a matrix applies similarity transformations to make the rows and columns have comparable norms. This is useful, for example, to reduce roundoff errors in the solution of eigenvalue problems. Balancing a matrix A consists of replacing A with a similar matrix

A' = D^{-1} A D

where D is a diagonal matrix whose entries are powers of the floating point radix.
This function replaces the matrix A with its balanced counterpart and stores the diagonal elements of the similarity transformation into the vector D.
The following program solves the linear system A x = b. The system to be solved is,

  [ 0.18  0.60  0.57  0.96 ] [x0]   [ 1.0 ]
  [ 0.41  0.24  0.99  0.58 ] [x1] = [ 2.0 ]
  [ 0.14  0.30  0.97  0.66 ] [x2]   [ 3.0 ]
  [ 0.51  0.13  0.19  0.85 ] [x3]   [ 4.0 ]

and the solution is found using LU decomposition of the matrix A.
Here is the output from the program,
x = -4.05205 -12.6056 1.66091 8.69377
This can be verified by multiplying the solution x by the original matrix A using GNU Octave,
octave> A = [ 0.18, 0.60, 0.57, 0.96;
0.41, 0.24, 0.99, 0.58;
0.14, 0.30, 0.97, 0.66;
0.51, 0.13, 0.19, 0.85 ];
octave> x = [ -4.05205; -12.6056; 1.66091; 8.69377];
octave> A * x
ans =
1.0000
2.0000
3.0000
4.0000
This reproduces the original right-hand side vector, b, in accordance with the equation A x = b.
Further information on the algorithms described in this section can be found in the following book,

G. H. Golub, C. F. Van Loan, Matrix Computations (3rd Ed, 1996), Johns Hopkins University Press, ISBN 0-8018-5414-8.
The LAPACK library is described in the following manual,

LAPACK Users' Guide (Third Edition, 1999), published by SIAM, ISBN 0-89871-447-8, http://www.netlib.org/lapack

The LAPACK source code can be found at the website above, along with an online copy of the users guide.
The Modified Golub-Reinsch algorithm is described in the following paper,

T. F. Chan, "An Improved Algorithm for Computing the Singular Value Decomposition", ACM Transactions on Mathematical Software, 8 (1982), pp 72-83.
The Jacobi algorithm for singular value decomposition is described in the following papers,

James Demmel, Krešimir Veselić, "Jacobi's Method is more accurate than QR", Lapack Working Note 15 (LAWN-15, October 1989). The working notes are available from netlib.org in the lawns or lawnspdf directories.
This document was generated by Autobuild on September 26, 2007 using texi2html 1.78.