Enum LinearAlgebraOperation
- Namespace
- DotCompute.Algorithms
- Assembly
- DotCompute.Algorithms.dll
Linear algebra operations supported by the DotCompute kernel library.
public enum LinearAlgebraOperation
Fields
CholeskyDecomposition = 7
Cholesky decomposition for symmetric positive-definite matrices.
Computes L where A = LL^T for symmetric positive-definite A. Roughly twice as efficient as LU decomposition for this matrix class (about half the floating-point operations).
Requirements: Matrix must be symmetric and positive-definite
Applications: Linear least squares, optimization, Monte Carlo
Numerical Stability: Highly stable for well-conditioned matrices
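The factorization can be sketched in plain Python (illustrative only; the DotCompute kernels are optimized GPU implementations):

```python
import math

def cholesky(A):
    """Return lower-triangular L with A = L @ L^T.

    Plain-Python sketch; assumes A is symmetric positive-definite
    and raises ValueError otherwise.
    """
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = A[i][i] - s
                if d <= 0.0:
                    raise ValueError("matrix is not positive-definite")
                L[i][j] = math.sqrt(d)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

A = [[4.0, 2.0], [2.0, 3.0]]
L = cholesky(A)
# Verify the reconstruction A == L L^T
for i in range(2):
    for j in range(2):
        assert abs(sum(L[i][k] * L[j][k] for k in range(2)) - A[i][j]) < 1e-12
```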
EigenDecomposition = 9
Eigenvalue decomposition for diagonalizable matrices.
Computes A = VΛV^(-1), where Λ is the diagonal matrix of eigenvalues and V contains the corresponding eigenvectors. Critical for spectral analysis.
Algorithm: QR algorithm with Hessenberg reduction
Complexity: O(N³) with iterative refinement
Applications: Stability analysis, quantum mechanics, vibration modes
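As a much simpler illustration of eigenvalue extraction than the QR algorithm named above, a power-iteration sketch recovers the dominant eigenpair of a small matrix:

```python
def power_iteration(A, iters=200):
    """Dominant eigenvalue/eigenvector via power iteration.

    A deliberately simple stand-in for the QR algorithm used by the
    actual kernel; converges when one eigenvalue dominates in magnitude.
    """
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)  # infinity-norm estimate
        v = [x / lam for x in w]
    return lam, v

# [[2, 1], [1, 2]] has eigenvalues 3 and 1; the dominant one is 3
lam, v = power_iteration([[2.0, 1.0], [1.0, 2.0]])
assert abs(lam - 3.0) < 1e-9
```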
HouseholderTransform = 2
Householder transformation application to matrices.
Applies the Householder reflector H = I - 2vv^T to a matrix A. A critical operation in QR decomposition and eigenvalue algorithms.
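The reflector is never formed explicitly; a sketch of the standard rank-one update A ← A - 2v(v^T A), assuming v has unit norm:

```python
def apply_householder(A, v):
    """Apply H = I - 2 v v^T (v assumed unit-norm) to matrix A in place,
    without forming H: for each column, A[:, j] -= 2 v (v . A[:, j])."""
    m, n = len(A), len(A[0])
    for j in range(n):
        dot = sum(v[i] * A[i][j] for i in range(m))
        for i in range(m):
            A[i][j] -= 2.0 * v[i] * dot
    return A

# v = e1 gives H = diag(-1, 1): the first row is negated
A = apply_householder([[1.0, 2.0], [3.0, 4.0]], [1.0, 0.0])
assert A == [[-1.0, -2.0], [3.0, 4.0]]
```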
HouseholderVector = 1
Householder reflection vector computation for QR decomposition.
Computes the Householder vector v such that H = I - 2vv^T reflects the input vector x onto a coordinate axis. Used as a building block for QR decomposition.
Numerical Stability: Designed to avoid catastrophic cancellation
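A minimal sketch of the computation, with the conventional sign choice that avoids the cancellation mentioned above:

```python
import math

def householder_vector(x):
    """Unit vector v such that (I - 2 v v^T) x = alpha * e1,
    with alpha = -sign(x[0]) * ||x||.

    Choosing alpha opposite in sign to x[0] makes x[0] - alpha an
    addition of same-signed terms, avoiding catastrophic cancellation.
    """
    alpha = -math.copysign(math.sqrt(sum(xi * xi for xi in x)), x[0])
    v = list(x)
    v[0] -= alpha
    norm = math.sqrt(sum(vi * vi for vi in v))
    return [vi / norm for vi in v]

# Reflecting x = [3, 4] (norm 5) should land on [-5, 0]
x = [3.0, 4.0]
v = householder_vector(x)
d = sum(vi * xi for vi, xi in zip(v, x))
hx = [xi - 2.0 * d * vi for vi, xi in zip(v, x)]
assert abs(hx[0] + 5.0) < 1e-12 and abs(hx[1]) < 1e-12
```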
JacobiSVD = 3
Jacobi Singular Value Decomposition (SVD).
Computes the SVD A = UΣV^T using Jacobi iterations. Provides high-accuracy singular values and vectors through iterative refinement.
Convergence: Quadratic convergence with proper pivoting
Accuracy: Machine precision singular values
Use Cases: Numerical analysis, principal component analysis, signal processing
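One-sided Jacobi rotations can be sketched in a few lines: plane rotations orthogonalize pairs of columns until the column norms are the singular values. This is an illustrative sequential sketch, not the kernel's parallel scheme:

```python
import math

def jacobi_singular_values(A, sweeps=30):
    """Singular values via one-sided Jacobi: rotate column pairs (p, q)
    to zero their inner product; once all columns are orthogonal,
    their norms are the singular values."""
    m, n = len(A), len(A[0])
    A = [row[:] for row in A]
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                app = sum(A[i][p] ** 2 for i in range(m))
                aqq = sum(A[i][q] ** 2 for i in range(m))
                apq = sum(A[i][p] * A[i][q] for i in range(m))
                if abs(apq) < 1e-15:
                    continue
                theta = 0.5 * math.atan2(2.0 * apq, app - aqq)
                c, s = math.cos(theta), math.sin(theta)
                for i in range(m):
                    A[i][p], A[i][q] = (c * A[i][p] + s * A[i][q],
                                        -s * A[i][p] + c * A[i][q])
    return sorted((math.sqrt(sum(A[i][j] ** 2 for i in range(m)))
                   for j in range(n)), reverse=True)

# [[2, 1], [1, 2]] has singular values 3 and 1
s = jacobi_singular_values([[2.0, 1.0], [1.0, 2.0]])
assert abs(s[0] - 3.0) < 1e-9 and abs(s[1] - 1.0) < 1e-9
```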
LUDecomposition = 8
LU decomposition with partial pivoting.
Computes PA = LU, where P is a permutation matrix, L is lower triangular, and U is upper triangular. A general-purpose factorization for linear systems.
Pivoting: Partial pivoting for numerical stability
Use Cases: Solving linear systems, matrix inversion, determinants
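The factorization with partial pivoting can be sketched as an in-place elimination (L and U packed into one array, as dense LU routines conventionally do):

```python
def lu_decompose(A):
    """PA = LU with partial pivoting.

    Returns (perm, LU) where perm records the row permutation and LU
    stores L's strict lower triangle (unit diagonal implied) together
    with U on and above the diagonal.
    """
    n = len(A)
    LU = [row[:] for row in A]
    perm = list(range(n))
    for k in range(n):
        # Partial pivoting: swap in the row with the largest |pivot|
        p = max(range(k, n), key=lambda i: abs(LU[i][k]))
        if p != k:
            LU[k], LU[p] = LU[p], LU[k]
            perm[k], perm[p] = perm[p], perm[k]
        for i in range(k + 1, n):
            LU[i][k] /= LU[k][k]          # multiplier, stored as L[i][k]
            for j in range(k + 1, n):
                LU[i][j] -= LU[i][k] * LU[k][j]
    return perm, LU

perm, LU = lu_decompose([[2.0, 3.0], [4.0, 7.0]])
assert perm == [1, 0]            # row 1 pivoted to the top
assert LU[0][0] == 4.0           # U[0][0]
assert abs(LU[1][0] - 0.5) < 1e-12   # L[1][0]
assert abs(LU[1][1] + 0.5) < 1e-12   # U[1][1]
```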
MatrixMultiply = 0
Matrix-matrix multiplication (GEMM - General Matrix Multiply).
Computes C = αAB + βC, where A is M×K, B is K×N, and C is M×N. Optimized implementations use tiled algorithms with shared memory.
Performance: Up to 4 TFLOPS on modern GPUs
Kernels: Tiled, Strassen's algorithm for large matrices
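The GEMM contract can be stated as a naive reference triple loop (the GPU kernels tile this loop nest over shared memory; this sketch only defines the semantics):

```python
def gemm(alpha, A, B, beta, C):
    """Reference GEMM: C <- alpha * A @ B + beta * C.

    A is M x K, B is K x N, C is M x N. Naive O(M*N*K) loop nest;
    real kernels tile i/j/k for cache and shared-memory reuse.
    """
    M, K, N = len(A), len(B), len(B[0])
    for i in range(M):
        for j in range(N):
            acc = sum(A[i][k] * B[k][j] for k in range(K))
            C[i][j] = alpha * acc + beta * C[i][j]
    return C

C = gemm(1.0, [[1.0, 2.0], [3.0, 4.0]],
              [[5.0, 6.0], [7.0, 8.0]],
         0.0, [[0.0, 0.0], [0.0, 0.0]])
assert C == [[19.0, 22.0], [43.0, 50.0]]
```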
MatrixVector = 4
Matrix-vector multiplication (GEMV).
Computes y = αAx + βy, where A is an M×N matrix, x is an N-vector, and y is an M-vector. Optimized for different matrix layouts (row-major, column-major).
Performance: Memory-bandwidth bound, benefits from coalesced access
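The GEMV semantics in a reference sketch (row-major traversal; a column-major kernel would swap the loop order to keep memory access coalesced):

```python
def gemv(alpha, A, x, beta, y):
    """Reference GEMV: y <- alpha * A @ x + beta * y.

    Each row of A is read once against x; the operation moves O(M*N)
    data for O(M*N) flops, hence memory-bandwidth bound.
    """
    for i in range(len(A)):
        dot = sum(A[i][j] * x[j] for j in range(len(x)))
        y[i] = alpha * dot + beta * y[i]
    return y

y = gemv(1.0, [[1.0, 2.0], [3.0, 4.0]], [1.0, 1.0], 0.0, [0.0, 0.0])
assert y == [3.0, 7.0]
```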
ParallelReduction = 5
Parallel reduction operations (sum, max, min, etc.).
Efficient parallel reduction using tree-based algorithms in shared memory. Used as a primitive for norms, dot products, and statistical operations.
Algorithm: Binary tree reduction with warp-level primitives
Performance: O(log N) depth, minimal synchronization overhead
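The tree shape can be sketched sequentially: each pass halves the array by combining pairs, so only O(log N) passes are needed (on a GPU each pass's pairs run in parallel):

```python
def tree_reduce(values, op=lambda a, b: a + b):
    """Pairwise tree reduction for any associative op.

    Combines element i with element i + half each pass; the pass count
    is ceil(log2 N), mirroring the depth of the GPU reduction tree.
    """
    vals = list(values)
    while len(vals) > 1:
        half = (len(vals) + 1) // 2
        vals = [op(vals[i], vals[i + half]) if i + half < len(vals) else vals[i]
                for i in range(half)]
    return vals[0]

assert tree_reduce(range(1, 101)) == 5050
assert tree_reduce([3.0, 1.0, 4.0, 1.0, 5.0], op=max) == 5.0
```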
QRAlgorithm = 6
QR algorithm for eigenvalue computation.
An iterative QR factorization algorithm for computing the eigenvalues and eigenvectors of general matrices. Uses Householder reflections or Givens rotations.
Convergence: Linear convergence, accelerated with shifts
Preprocessing: Hessenberg reduction for efficiency
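The core iteration A ← RQ can be sketched with a Gram-Schmidt QR (for clarity only; production code uses Householder reflections and the shifts and Hessenberg reduction noted above):

```python
import math

def qr_decompose(A):
    """QR via classical Gram-Schmidt; adequate for this tiny sketch."""
    n = len(A)
    Q = [[0.0] * n for _ in range(n)]
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = [A[i][j] for i in range(n)]
        for k in range(j):
            R[k][j] = sum(Q[i][k] * A[i][j] for i in range(n))
            for i in range(n):
                v[i] -= R[k][j] * Q[i][k]
        R[j][j] = math.sqrt(sum(x * x for x in v))
        for i in range(n):
            Q[i][j] = v[i] / R[j][j]
    return Q, R

def qr_eigenvalues(A, iters=100):
    """Unshifted QR iteration: factor A = QR, form A <- RQ, repeat.

    The iterates converge toward (quasi-)triangular form, so the
    diagonal approaches the eigenvalues. Unshifted convergence is
    linear; this is why practical kernels add shifts.
    """
    n = len(A)
    for _ in range(iters):
        Q, R = qr_decompose(A)
        A = [[sum(R[i][k] * Q[k][j] for k in range(n))
              for j in range(n)] for i in range(n)]
    return [A[i][i] for i in range(n)]

# [[2, 1], [1, 2]] has eigenvalues 3 and 1
evs = sorted(qr_eigenvalues([[2.0, 1.0], [1.0, 2.0]]))
assert abs(evs[0] - 1.0) < 1e-9 and abs(evs[1] - 3.0) < 1e-9
```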
Remarks
Enumerates GPU-accelerated linear algebra operations with optimized kernels for CUDA, OpenCL, and Metal backends. Each operation has specialized implementations for different matrix types and sizes.
Used by the LinearAlgebraKernelLibrary to select the appropriate kernel implementation based on operation type and hardware capabilities.