CUDALink`

CUDAMemoryAllocate

CUDAMemoryAllocate[type,dim]

gives CUDAMemory with the specified type and a single dimension.

CUDAMemoryAllocate[type,{dim1,dim2,…}]

gives CUDAMemory with the specified type and dimensions.

Details and Options

  • The CUDALink application must be loaded using Needs["CUDALink`"].
  • Possible types for CUDAMemoryAllocate are:
    Integer              Real                 Complex
    "Byte"               "Bit16"              "Integer32"
    "Byte[2]"            "Bit16[2]"           "Integer32[2]"
    "Byte[3]"            "Bit16[3]"           "Integer32[3]"
    "Byte[4]"            "Bit16[4]"           "Integer32[4]"
    "UnsignedByte"       "UnsignedBit16"      "UnsignedInteger"
    "UnsignedByte[2]"    "UnsignedBit16[2]"   "UnsignedInteger[2]"
    "UnsignedByte[3]"    "UnsignedBit16[3]"   "UnsignedInteger[3]"
    "UnsignedByte[4]"    "UnsignedBit16[4]"   "UnsignedInteger[4]"
    "Double"             "Float"              "Integer64"
    "Double[2]"          "Float[2]"           "Integer64[2]"
    "Double[3]"          "Float[3]"           "Integer64[3]"
    "Double[4]"          "Float[4]"           "Integer64[4]"
  • The following options can be given:
  • "Device"$CUDADeviceCUDA device used in computation
    "TargetPrecision"Automaticprecision used in computation

Examples


Basic Examples  (4)

First, load the CUDALink application:
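For example:

    Needs["CUDALink`"]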

This allocates a rank 3 tensor with each dimension 10:
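A representative call (the element type "Float" and the symbol mem used here and below are illustrative):

    mem = CUDAMemoryAllocate["Float", {10, 10, 10}]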

Information about memory can be retrieved via CUDAMemoryInformation:
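For example:

    CUDAMemoryInformation[mem]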

This unloads the memory:
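For example:

    CUDAMemoryUnload[mem]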

For a single dimension, the length can be an integer:
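For instance (the type and length chosen here are illustrative):

    mem = CUDAMemoryAllocate["Integer32", 100]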

Like CUDAMemoryLoad, different types are supported:
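For example, with vector element types (the particular types shown are illustrative):

    CUDAMemoryAllocate["Double[2]", {10, 10}]
    CUDAMemoryAllocate["UnsignedByte[4]", 100]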

Allocating memory as Real or Complex chooses the element type based on whether the device supports double precision:
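For example (the symbol mem is illustrative):

    mem = CUDAMemoryAllocate[Real, 100]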

In this case, the CUDA device has double-precision support:
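A sketch of checking the resolved element type, assuming CUDAMemoryInformation reports a "Type" entry:

    "Type" /. CUDAMemoryInformation[mem]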

This behavior can be overridden by setting "TargetPrecision":
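A sketch, assuming "TargetPrecision" accepts an explicit "Single" setting:

    CUDAMemoryAllocate[Real, 100, "TargetPrecision" -> "Single"]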

Applications  (1)

This CUDA kernel sets all elements in a list to 0:
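A sketch of such a kernel (the source string and the name zeroKernel are illustrative; mint is the CUDALink C type corresponding to _Integer):

    code = "
    __global__ void zeroKernel(mint * arr, mint length) {
        int index = threadIdx.x + blockIdx.x*blockDim.x;
        if (index < length)
            arr[index] = 0;
    }";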

This allocates the required memory:
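For example (the length 100 is illustrative):

    mem = CUDAMemoryAllocate[Integer, 100]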

This loads the function:
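A sketch using CUDAFunctionLoad with the kernel source above (the argument specification and block size are illustrative):

    zeroFun = CUDAFunctionLoad[code, "zeroKernel", {{_Integer, "InputOutput"}, _Integer}, 256]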

This runs the function:
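A sketch of invoking the loaded function on the memory handle and its length, assuming the launch size is inferred from the memory's dimensions:

    zeroFun[mem, 100]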

This shows information about the memory; note that the "DeviceStatus" is "Synchronized":
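For example, looking at the "DeviceStatus" entry:

    "DeviceStatus" /. CUDAMemoryInformation[mem]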

This gets the memory from the GPU:
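For example:

    CUDAMemoryGet[mem]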

This shows information about the memory; note that the "DeviceStatus" and "HostStatus" are "Synchronized":
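For example, looking at both status entries:

    {"HostStatus", "DeviceStatus"} /. CUDAMemoryInformation[mem]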

Possible Issues  (1)

Getting allocated memory from the GPU before it has been set returns random (uninitialized) values:
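A sketch (the type and length are illustrative):

    mem = CUDAMemoryAllocate[Integer, 10];
    CUDAMemoryGet[mem]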


Text

Wolfram Research (2010), CUDAMemoryAllocate, Wolfram Language function, https://reference.wolfram.com/language/CUDALink/ref/CUDAMemoryAllocate.html.

CMS

Wolfram Language. 2010. "CUDAMemoryAllocate." Wolfram Language & System Documentation Center. Wolfram Research. https://reference.wolfram.com/language/CUDALink/ref/CUDAMemoryAllocate.html.

APA

Wolfram Language. (2010). CUDAMemoryAllocate. Wolfram Language & System Documentation Center. Retrieved from https://reference.wolfram.com/language/CUDALink/ref/CUDAMemoryAllocate.html

BibTeX

@misc{reference.wolfram_2024_cudamemoryallocate, author="Wolfram Research", title="{CUDAMemoryAllocate}", year="2010", howpublished="\url{https://reference.wolfram.com/language/CUDALink/ref/CUDAMemoryAllocate.html}", note={Accessed: 23-November-2024}}

BibLaTeX

@online{reference.wolfram_2024_cudamemoryallocate, organization={Wolfram Research}, title={CUDAMemoryAllocate}, year={2010}, url={https://reference.wolfram.com/language/CUDALink/ref/CUDAMemoryAllocate.html}, note={Accessed: 23-November-2024}}