skcuda.linalg.add_dot

skcuda.linalg.add_dot(a_gpu, b_gpu, c_gpu, transa='N', transb='N', alpha=1.0, beta=1.0, handle=None)[source]

Computes the matrix product of two arrays, scales it, and accumulates it into a third matrix.

In essence, this computes

C = alpha * op(A) op(B) + beta * C

where op(X) is X, its transpose, or its conjugate transpose, as selected by transa and transb.

For 2D arrays, if op(a_gpu) has shape (m, k) and op(b_gpu) has shape (k, n), the product has shape (m, n), which must match the shape of c_gpu.
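The update above can be modeled on the CPU as follows (a NumPy sketch of the semantics only, not the GPU implementation; the helper name add_dot_reference is illustrative):

```python
import numpy as np

def add_dot_reference(a, b, c, transa='N', transb='N', alpha=1.0, beta=1.0):
    """NumPy model of the update c = alpha * op(a) @ op(b) + beta * c."""
    op = {'N': lambda x: x,
          'T': lambda x: x.T,
          'C': lambda x: x.conj().T}
    return alpha * op[transa](a) @ op[transb](b) + beta * c

# op(a) has shape (3, 2), op(b) has shape (2, 4), so the result is (3, 4).
a = np.arange(6, dtype=np.float64).reshape(3, 2)
b = np.arange(8, dtype=np.float64).reshape(2, 4)
c = np.ones((3, 4))
out = add_dot_reference(a, b, c, alpha=2.0, beta=0.5)
```

With transa='T', a would instead need shape (2, 3) so that op(a) is again (3, 2).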

Parameters:
  • a_gpu (pycuda.gpuarray.GPUArray) – Input array.
  • b_gpu (pycuda.gpuarray.GPUArray) – Input array.
  • c_gpu (pycuda.gpuarray.GPUArray) – Cumulative array.
  • transa (char) – If ‘T’, use the transpose of a_gpu; if ‘C’, use the conjugate (Hermitian) transpose; if ‘N’ (the default), use a_gpu as is.
  • transb (char) – If ‘T’, use the transpose of b_gpu; if ‘C’, use the conjugate (Hermitian) transpose; if ‘N’ (the default), use b_gpu as is.
  • alpha (number) – Scalar by which the product op(a_gpu) op(b_gpu) is multiplied.
  • beta (number) – Scalar by which c_gpu is multiplied before the product is added to it.
  • handle (int (optional)) – CUBLAS context. If no context is specified, the default handle from skcuda.misc._global_cublas_handle is used.
Returns:

c_gpu

Return type:

pycuda.gpuarray.GPUArray

Notes

The matrices must all contain elements of the same data type. The operation is performed in place: c_gpu is overwritten with the result and also returned.
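A minimal usage sketch, assuming a working CUDA device with PyCUDA and scikit-cuda installed (the shapes are arbitrary; any conforming (m, k), (k, n), (m, n) triple works):

```python
import numpy as np
import pycuda.autoinit  # noqa: F401  (creates a CUDA context on import)
import pycuda.gpuarray as gpuarray
import skcuda.linalg as linalg

linalg.init()  # initialize the CUBLAS context used by skcuda

a = np.asarray(np.random.rand(4, 2), np.float32)
b = np.asarray(np.random.rand(2, 3), np.float32)
c = np.asarray(np.random.rand(4, 3), np.float32)

a_gpu = gpuarray.to_gpu(a)
b_gpu = gpuarray.to_gpu(b)
c_gpu = gpuarray.to_gpu(c)

# With the default alpha=1.0, beta=1.0 this accumulates dot(a, b) into c_gpu.
linalg.add_dot(a_gpu, b_gpu, c_gpu)
print(np.allclose(np.dot(a, b) + c, c_gpu.get()))
```

Note that the original contents of c_gpu are destroyed; copy the array first if you need to keep them.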