Magnum::Math::Algorithms namespace

Algorithms.

Various matrix and vector algorithms.

This library is built as part of Magnum by default. To use this library with CMake, you need to find the Magnum package and link to the Magnum::Magnum target:

find_package(Magnum REQUIRED)

# ...
target_link_libraries(your-app Magnum::Magnum)

See Downloading and building and Usage with CMake for more information.

Functions

template<std::size_t size, std::size_t rows, class T>
auto gaussJordanInPlaceTransposed(RectangularMatrix<size, size, T>& a, RectangularMatrix<size, rows, T>& t) -> bool
In-place Gauss-Jordan elimination on transposed matrices.
template<std::size_t size, std::size_t cols, class T>
auto gaussJordanInPlace(RectangularMatrix<size, size, T>& a, RectangularMatrix<cols, size, T>& t) -> bool
In-place Gauss-Jordan elimination.
template<std::size_t size, class T>
auto gaussJordanInverted(Matrix<size, T> matrix) -> Matrix<size, T>
Gauss-Jordan matrix inversion.
template<std::size_t cols, std::size_t rows, class T>
void gramSchmidtOrthogonalizeInPlace(RectangularMatrix<cols, rows, T>& matrix)
In-place Gram-Schmidt matrix orthogonalization.
template<std::size_t cols, std::size_t rows, class T>
auto gramSchmidtOrthogonalize(RectangularMatrix<cols, rows, T> matrix) -> RectangularMatrix<cols, rows, T>
Gram-Schmidt matrix orthogonalization.
template<std::size_t cols, std::size_t rows, class T>
void gramSchmidtOrthonormalizeInPlace(RectangularMatrix<cols, rows, T>& matrix)
In-place Gram-Schmidt matrix orthonormalization.
template<std::size_t cols, std::size_t rows, class T>
auto gramSchmidtOrthonormalize(RectangularMatrix<cols, rows, T> matrix) -> RectangularMatrix<cols, rows, T>
Gram-Schmidt matrix orthonormalization.
template<class Iterator, class T = typename std::decay<decltype(*std::declval<Iterator>())>::type>
auto kahanSum(Iterator begin, Iterator end, T sum = T(0), T* compensation = nullptr) -> T
Kahan summation algorithm.
template<std::size_t size, class T>
auto qr(const Matrix<size, T>& matrix) -> std::pair<Matrix<size, T>, Matrix<size, T>>
QR decomposition.
template<std::size_t cols, std::size_t rows, class T>
auto svd(RectangularMatrix<cols, rows, T> m) -> std::tuple<RectangularMatrix<cols, rows, T>, Vector<cols, T>, Matrix<cols, T>>
Singular Value Decomposition.

Function documentation

template<std::size_t size, std::size_t rows, class T>
bool Magnum::Math::Algorithms::gaussJordanInPlaceTransposed(RectangularMatrix<size, size, T>& a, RectangularMatrix<size, rows, T>& t)

In-place Gauss-Jordan elimination on transposed matrices.

Parameters
a Transposed left side of augmented matrix
t Transposed right side of augmented matrix
Returns True if a is regular, false if a is singular (and thus the system cannot be solved).

As Gauss-Jordan elimination works on rows and matrices in OpenGL are column-major, it is more efficient to operate on transposed matrices and treat columns as rows. See also gaussJordanInPlace() which works with non-transposed matrices.

The function eliminates matrix a and solves t in place. For efficiency reasons, only pure Gaussian elimination is done on a and the final backsubstitution is done only on t, as a would always end with identity matrix anyway.

Based on the ultra-compact Python implementation by Jarno Elonen, http://elonen.iki.fi/code/misc-notes/python-gaussj/index.html.
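The elimination itself can be sketched in standalone C++ on plain row-major arrays (illustrative only — the name gaussJordanSolve is hypothetical, and unlike Magnum's implementation this version performs the full Gauss-Jordan reduction on both sides rather than restricting backsubstitution to the right side):

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <utility>

/* Sketch of in-place Gauss-Jordan elimination on a row-major NxN matrix
   `a` with a single right-hand side `t`. On success, `t` holds the
   solution and the function returns true; returns false if `a` is
   singular. Magnum's gaussJordanInPlaceTransposed() instead treats
   columns as rows to match column-major storage. */
template<std::size_t N>
bool gaussJordanSolve(std::array<std::array<double, N>, N>& a,
                      std::array<double, N>& t) {
    for(std::size_t row = 0; row != N; ++row) {
        /* Partial pivoting: pick the row with the largest absolute
           value in the current column */
        std::size_t pivot = row;
        for(std::size_t i = row + 1; i != N; ++i)
            if(std::abs(a[i][row]) > std::abs(a[pivot][row])) pivot = i;
        if(std::abs(a[pivot][row]) < 1.0e-12) return false; /* singular */
        std::swap(a[row], a[pivot]);
        std::swap(t[row], t[pivot]);

        /* Normalize the pivot row */
        const double inv = 1.0/a[row][row];
        for(std::size_t j = 0; j != N; ++j) a[row][j] *= inv;
        t[row] *= inv;

        /* Eliminate the current column from all other rows */
        for(std::size_t i = 0; i != N; ++i) {
            if(i == row) continue;
            const double f = a[i][row];
            for(std::size_t j = 0; j != N; ++j) a[i][j] -= f*a[row][j];
            t[i] -= f*t[row];
        }
    }
    return true;
}
```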

template<std::size_t size, std::size_t cols, class T>
bool Magnum::Math::Algorithms::gaussJordanInPlace(RectangularMatrix<size, size, T>& a, RectangularMatrix<cols, size, T>& t)

In-place Gauss-Jordan elimination.

Transposes the matrices, calls gaussJordanInPlaceTransposed() on them and then transposes them back.

template<std::size_t size, class T>
Matrix<size, T> Magnum::Math::Algorithms::gaussJordanInverted(Matrix<size, T> matrix)

Gauss-Jordan matrix inversion.

Since $ (\boldsymbol{A}^{-1})^T = (\boldsymbol{A}^T)^{-1} $ , passes the matrix together with an identity matrix to gaussJordanInPlaceTransposed() and returns the inverted matrix. Expects that the matrix is invertible.
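The underlying idea — running Gauss-Jordan elimination on the augmented matrix [A | I] so that the right half ends up holding A⁻¹ — can be sketched in standalone C++ (the helper name invertGaussJordan is hypothetical, not Magnum API):

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <utility>

/* Sketch: invert a row-major NxN matrix `m` by Gauss-Jordan elimination
   on the augmented matrix [M | I]. Returns false if `m` is singular. */
template<std::size_t N>
bool invertGaussJordan(const std::array<std::array<double, N>, N>& m,
                       std::array<std::array<double, N>, N>& out) {
    /* Build the augmented matrix [M | I]: N rows, 2N columns */
    std::array<std::array<double, 2*N>, N> a{};
    for(std::size_t i = 0; i != N; ++i) {
        for(std::size_t j = 0; j != N; ++j) a[i][j] = m[i][j];
        a[i][N + i] = 1.0;
    }

    for(std::size_t col = 0; col != N; ++col) {
        /* Partial pivoting */
        std::size_t pivot = col;
        for(std::size_t i = col + 1; i != N; ++i)
            if(std::abs(a[i][col]) > std::abs(a[pivot][col])) pivot = i;
        if(std::abs(a[pivot][col]) < 1.0e-12) return false; /* singular */
        std::swap(a[col], a[pivot]);

        /* Normalize the pivot row, then clear the column elsewhere */
        const double inv = 1.0/a[col][col];
        for(std::size_t j = 0; j != 2*N; ++j) a[col][j] *= inv;
        for(std::size_t i = 0; i != N; ++i) {
            if(i == col) continue;
            const double f = a[i][col];
            for(std::size_t j = 0; j != 2*N; ++j) a[i][j] -= f*a[col][j];
        }
    }

    /* The left half is now the identity; the right half is M^-1 */
    for(std::size_t i = 0; i != N; ++i)
        for(std::size_t j = 0; j != N; ++j) out[i][j] = a[i][N + j];
    return true;
}
```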

template<std::size_t cols, std::size_t rows, class T>
void Magnum::Math::Algorithms::gramSchmidtOrthogonalizeInPlace(RectangularMatrix<cols, rows, T>& matrix)

In-place Gram-Schmidt matrix orthogonalization.

Parameters
matrix in/out Matrix to perform orthogonalization on

template<std::size_t cols, std::size_t rows, class T>
RectangularMatrix<cols, rows, T> Magnum::Math::Algorithms::gramSchmidtOrthogonalize(RectangularMatrix<cols, rows, T> matrix)

Gram-Schmidt matrix orthogonalization.

Unlike gramSchmidtOrthogonalizeInPlace(), this function returns the orthogonalized matrix instead of modifying the argument in place.
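The projection-subtraction step at the core of Gram-Schmidt can be sketched in standalone C++ on 3D column vectors (illustrative only, not the Magnum API; orthonormalization would additionally normalize each resulting column):

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <cstddef>

using Vec3 = std::array<double, 3>;

double dot(const Vec3& a, const Vec3& b) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

/* Sketch of classical Gram-Schmidt orthogonalization: from each column,
   subtract its projections onto all previously processed columns, so
   the resulting columns are mutually orthogonal (but not unit-length).
   Assumes the columns are linearly independent. */
void orthogonalizeColumns(std::array<Vec3, 3>& cols) {
    for(std::size_t i = 0; i != 3; ++i) {
        for(std::size_t j = 0; j != i; ++j) {
            const double factor = dot(cols[i], cols[j])/dot(cols[j], cols[j]);
            for(std::size_t k = 0; k != 3; ++k)
                cols[i][k] -= factor*cols[j][k];
        }
    }
}
```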

template<std::size_t cols, std::size_t rows, class T>
void Magnum::Math::Algorithms::gramSchmidtOrthonormalizeInPlace(RectangularMatrix<cols, rows, T>& matrix)

In-place Gram-Schmidt matrix orthonormalization.

Parameters
matrix in/out Matrix to perform orthonormalization on

template<std::size_t cols, std::size_t rows, class T>
RectangularMatrix<cols, rows, T> Magnum::Math::Algorithms::gramSchmidtOrthonormalize(RectangularMatrix<cols, rows, T> matrix)

Gram-Schmidt matrix orthonormalization.

Unlike gramSchmidtOrthonormalizeInPlace(), this function returns the orthonormalized matrix instead of modifying the argument in place.

template<class Iterator, class T = typename std::decay<decltype(*std::declval<Iterator>())>::type>
T Magnum::Math::Algorithms::kahanSum(Iterator begin, Iterator end, T sum = T(0), T* compensation = nullptr)

Kahan summation algorithm.

Parameters
begin in Range begin
end in Range end
sum in Initial value for the sum
compensation in/out Floating-point roundoff error compensation value. If non-nullptr, the value behind the pointer is used as the initial compensation value and the resulting compensation value is saved back there, so the summation can be continued in a subsequent call.

Calculates a sum of a large range of floating-point numbers with roundoff error compensation. Compared to for example std::accumulate() the algorithm significantly reduces numerical error in the total. See Wikipedia for an in-depth explanation: https://en.wikipedia.org/wiki/Kahan_summation_algorithm

Example with summation of a hundred million ones:

std::vector<Float> data(100000000, 1.0f);
Float a = std::accumulate(data.begin(), data.end(), 0.0f);      // 1.6777e7f
Float b = Math::Algorithms::kahanSum(data.begin(), data.end()); // 1.000e8f

If required, it is also possible to use this algorithm on non-contiguous ranges or single values (for example when calculating sum of pixel values in an image with some row padding or when the inputs are generated / converted from other values):

Containers::ArrayView<UnsignedByte> pixels;
Float sum = 0.0f, c = 0.0f;
for(UnsignedByte pixel: pixels) {
    Float value = Math::normalize<Float>(pixel);
    sum = Math::Algorithms::kahanSum(&value, &value + 1, sum, &c);
}
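For reference, the compensation loop at the heart of the algorithm can be sketched in standalone C++ (the name kahanSumSketch is hypothetical, not the Magnum implementation):

```cpp
#include <cassert>
#include <cstddef>

/* Sketch of Kahan summation over a range of floats: `c` accumulates the
   low-order bits that would otherwise be lost when adding a small `y`
   to a large running `sum`. */
float kahanSumSketch(const float* begin, const float* end,
                     float sum = 0.0f, float c = 0.0f) {
    for(const float* it = begin; it != end; ++it) {
        const float y = *it - c;  /* apply previous compensation */
        const float t = sum + y;  /* low-order bits of y may be lost here */
        c = (t - sum) - y;        /* algebraically zero; in floating point
                                     it recovers the lost bits (negated) */
        sum = t;
    }
    return sum;
}
```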

template<std::size_t size, class T>
std::pair<Matrix<size, T>, Matrix<size, T>> Magnum::Math::Algorithms::qr(const Matrix<size, T>& matrix)

QR decomposition.

Calculated using the classical Gram-Schmidt process.
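The process can be sketched in standalone C++ for a 2×2 matrix stored as column vectors (illustrative only — the name qrDecompose is hypothetical): the columns of M are orthonormalized to form Q, and the projection coefficients fill in the upper-triangular R, so that M = QR.

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <cstddef>

using Vec2 = std::array<double, 2>;
using Mat2 = std::array<Vec2, 2>; /* array of column vectors */

/* Sketch of 2x2 QR decomposition via classical Gram-Schmidt. Matrices
   are column-major: r[j][i] is the R element in row i, column j.
   Assumes `m` has full rank (no zero-norm check). */
void qrDecompose(const Mat2& m, Mat2& q, Mat2& r) {
    r = {};
    for(std::size_t j = 0; j != 2; ++j) {
        Vec2 v = m[j];
        /* Subtract projections onto the already computed Q columns;
           the coefficients become the off-diagonal entries of R */
        for(std::size_t i = 0; i != j; ++i) {
            r[j][i] = q[i][0]*m[j][0] + q[i][1]*m[j][1];
            v[0] -= r[j][i]*q[i][0];
            v[1] -= r[j][i]*q[i][1];
        }
        /* The remaining length becomes the diagonal entry of R;
           the normalized remainder is the next Q column */
        r[j][j] = std::sqrt(v[0]*v[0] + v[1]*v[1]);
        q[j] = Vec2{v[0]/r[j][j], v[1]/r[j][j]};
    }
}
```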

template<std::size_t cols, std::size_t rows, class T>
std::tuple<RectangularMatrix<cols, rows, T>, Vector<cols, T>, Matrix<cols, T>> Magnum::Math::Algorithms::svd(RectangularMatrix<cols, rows, T> m)

Singular Value Decomposition.

Performs Thin SVD on given matrix where rows >= cols:

\[ M = U \Sigma V^* \]

Returns the first cols column vectors of $ U $ , the diagonal of $ \Sigma $ and the non-transposed $ V $ . If the solution doesn't converge, zero matrices are returned.

The full $ U $ and $ \Sigma $ matrices and the original $ M $ matrix can be reconstructed from the returned values as follows:

Math::RectangularMatrix<cols, rows, Double> m;

Math::RectangularMatrix<cols, rows, Double> uPart;
Math::Vector<cols, Double> wDiagonal;
Math::Matrix<cols, Double> v;

std::tie(uPart, wDiagonal, v) = Math::Algorithms::svd(m);

// Extend U with zero columns
Math::Matrix<rows, Double> u{Math::ZeroInit};
for(std::size_t i = 0; i != cols; ++i)
    u[i] = uPart[i];

// Diagonal W
Math::RectangularMatrix<cols, rows, Double> w =
    Math::RectangularMatrix<cols, rows, Double>::fromDiagonal(wDiagonal);

// u*w*v.transposed() == m

Implementation based on Golub, G. H.; Reinsch, C. (1970). "Singular value decomposition and least squares solutions".