skcuda.rlinalg.rsvd

skcuda.rlinalg.rsvd(a_gpu, k=None, p=0, q=0, method='standard', handle=None)[source]

Randomized Singular Value Decomposition.

Randomized algorithm for computing the approximate low-rank singular value decomposition of a rectangular (m, n) matrix a with target rank k << n. The input matrix a is factored as a = U * diag(s) * Vt. The left singular vectors are the columns of the real or complex unitary matrix U. The right singular vectors are the columns of the real or complex unitary matrix V. The singular values s are non-negative real numbers.

The parameter p is an oversampling parameter used to improve the approximation. A value between 2 and 10 is recommended.

The parameter q specifies the number of normalized power iterations (subspace iterations) used to reduce the approximation error. This is recommended if the singular values decay slowly; in practice, 1 or 2 iterations achieve good results. However, power iterations increase the computational time.

If k > (n/1.5), a partial or truncated SVD might be faster.
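As an illustration, a call that uses both oversampling and a single power iteration might look as follows (a sketch only; the matrix size, target rank, and parameter values are chosen purely for demonstration):

>>> import pycuda.autoinit
>>> import pycuda.gpuarray as gpuarray
>>> import numpy as np
>>> from skcuda import linalg, rlinalg
>>> linalg.init()
>>> rlinalg.init()
>>> a = np.array(np.random.randn(400, 100), np.float32, order='F')
>>> a_gpu = gpuarray.to_gpu(a)
>>> # Target rank k=10 with oversampling p=5 and one power iteration q=1
>>> U, s, Vt = rlinalg.rsvd(a_gpu, k=10, p=5, q=1, method='standard')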

Parameters:
  • a_gpu (pycuda.gpuarray.GPUArray) – Real/complex input matrix a with dimensions (m, n).
  • k (int) – k is the target rank of the low-rank decomposition, k << min(m,n).
  • p (int) – p sets the oversampling parameter (default: p=0).
  • q (int) – q sets the number of power iterations (default=0).
  • method ({‘standard’, ‘fast’}) – ‘standard’: Standard algorithm as described in [1, 2]; ‘fast’: Version II algorithm as described in [2].
  • handle (int) – CUBLAS context. If no context is specified, the default handle from skcuda.misc._global_cublas_handle is used.
Returns:

  • u_gpu (pycuda.gpuarray) – Left singular vectors, array of shape (m, k).
  • s_gpu (pycuda.gpuarray) – Singular values, 1-d array of length k.
  • vt_gpu (pycuda.gpuarray) – Right singular vectors (as rows), array of shape (k, n).

Notes

Double precision is only supported if the standard version of the CULA Dense toolkit is installed.

This function destroys the contents of the input matrix.

Arrays are assumed to be stored in column-major order, i.e., order=’F’.

Input matrices of shape (m, n) with n > m are not yet supported.

References

N. Halko, P. Martinsson, and J. Tropp. “Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions” (2009). (available at arXiv).

S. Voronin and P. Martinsson. “RSVDPACK: Subroutines for computing partial singular value decompositions via randomized sampling on single core, multi core, and GPU architectures” (2015). (available at arXiv).

Examples

>>> import pycuda.gpuarray as gpuarray
>>> import pycuda.autoinit
>>> import numpy as np
>>> from skcuda import linalg, rlinalg
>>> linalg.init()
>>> rlinalg.init()
>>> #Randomized SVD of the square matrix `a` in single precision.
>>> #Note: there is no gain in using rsvd if k > int(n/1.5).
>>> a = np.array(np.random.randn(5, 5), np.float32, order='F')
>>> a_gpu = gpuarray.to_gpu(a)
>>> U, s, Vt = rlinalg.rsvd(a_gpu, k=5, method='standard')
>>> np.allclose(a, np.dot(U.get(), np.dot(np.diag(s.get()), Vt.get())), 1e-4)
True
>>> #Low-rank SVD decomposition with target rank k=2
>>> a = np.array(np.random.randn(5, 5), np.float32, order='F')
>>> a_gpu = gpuarray.to_gpu(a)
>>> U, s, Vt = rlinalg.rsvd(a_gpu, k=2, method='standard')
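To inspect the quality of the rank-2 approximation, the factors can be multiplied back together and compared against `a`. The size of the error depends on how quickly the singular values of the random test matrix decay, so no specific value is asserted here (an illustrative continuation, not part of the original example):

>>> # Reconstruct the rank-2 approximation from the returned factors
>>> a_rec = np.dot(U.get(), np.dot(np.diag(s.get()), Vt.get()))
>>> # Frobenius norm of the approximation error
>>> err = np.linalg.norm(a - a_rec)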