"WolframBatch" (Batch Computation Provider)
Details
Environment Properties
Job Settings
| "RemoteMachineClass" | "Basic1x8" | type of computing resource to use |
| "Basic1x8" | basic machine class with 1 CPU core and 8 GB of memory | |
| "Basic2x8" | 2 CPU cores and 8 GB of memory | |
| "Basic4x16" | 4 CPU cores and 16 GB of memory | |
| "Memory8x64" | memory-optimized class with 8 CPU cores and 64 GB of memory | |
| "Memory16x128" | 16 CPU cores and 128 GB of memory | |
| "Memory192x1536" | 192 CPU cores and 1536 GB of memory | |
| "Compute64x128" | compute-optimized class with 64 CPU cores and 128 GB of memory | |
| "Compute192x384" | 192 CPU cores and 384 GB of memory | |
| "GPU1xL40S" | 1 NVIDIA L40S GPU with 44 GiB of GPU memory | |
| "GPU4xL4" | 4 NVIDIA L4 GPUs with 89 GiB of total GPU memory |
Job Notifications
| "Email" | sends an email to the address specified by $WolframID | |
| "SMS" | sends an SMS text message to $MobilePhone |
| "JobStarting" | job is no longer queued and is preparing to start | |
| "JobStarted" | job has started running | |
| "JobCompleted" | job has succeeded or terminated | |
| "JobSucceeded" | job succeeded producing a result | |
| "JobTerminated" | job terminated abnormally | |
| "JobStatusChanged" | issued for all job status change events |
| "Hourly" | approximately every hour while job is running | |
| "Daily" | approximately every day while job is running | |
| {"HoursElapsed",n} | approximately every n hours since job creation |
| {"JobCreditsUsed",n} | every time approximately n credits have been consumed |
Job Statuses
| "Queued" | the job is waiting for compute resources to be available | |
| "Starting" | the job has been scheduled to an instance and its container image is being downloaded | |
| "Running" | the job's container has started | |
| "Succeeded" | the job's execution has succeeded and its output has been uploaded | |
| "Terminated" | the job has been aborted or stopped due to errors |
Job Properties
| "CreditsSpent" | number of Service Credits spent by the job | |
| "JobExitCode" | exit code returned by the kernel within the job container | |
| "JobLogData" | console log and timestamp data as a list | |
| "JobLogString" | console logs from the job container | |
| "JobLogTabular" | console log and timestamp data in Tabular format | |
| "JobStatusReason" | string describing the reason for which the job is in its current state | |
| "OutputPreviewImage" | image preview of the job's evaluation result |
| "CreditsSpent" | number of Service Credits spent by the job | |
| "JobStatusReason" | string describing the reason for which the job is in its current state | |
| "OutputPreviewImage" | image preview of the first child job's evaluation result |
| "JobLogString" | console logs from the job container |
Examples
Basic Examples (1)
The default remote batch environment is "WolframBatch":
Submit a job using the default "WolframBatch" environment. RemoteBatchSubmit returns a RemoteBatchJobObject that allows you to manage and view the state of the job:
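A minimal sketch of such a submission, assuming the "EvaluationResult" property is available once the job has succeeded, as with other batch providers:

```wl
(* Submit using the "WolframBatch" environment and fetch the result. *)
job = RemoteBatchSubmit[RemoteBatchSubmissionEnvironment["WolframBatch"],
  Integrate[Sin[x]^10, x]];
(* Once job["JobStatus"] returns "Succeeded": *)
job["EvaluationResult"]
```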
Job Settings (3)
Properties & Relations (8)
Run a batch job on a computer with a large memory capacity:
Run a batch job with a given service credit budget:
Run a batch job with a given maximum runtime:
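A runtime limit can be sketched with the TimeConstraint option of RemoteBatchSubmit, which accepts a Quantity:

```wl
(* Limit the job to at most one hour of runtime. *)
RemoteBatchSubmit[env, longComputation,
  TimeConstraint -> Quantity[1, "Hours"]]
```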
Receive SMS text notifications for job state change events:
Receive text notifications when the job starts and finishes, and emails every 2 hours while it runs:
Do not produce any notifications:
Use a default set of notifications:
Verify the detailed specification used for the default notifications:
Receive a notification when a job either succeeds or terminates:
Possible Issues (4)
Jobs that exceed their memory limit are terminated:
Neural network operations use the CPU by default, even if a job’s machine class has GPU hardware available:
Specify the TargetDevice option in order to use a GPU for training or inference:
NetTrain[…,TargetDevice->"GPU"] uses a single GPU, regardless of the number of GPUs available to the job:
Specify TargetDevice->{"GPU",All} in order to use all available GPUs for training:
Multi-GPU inference is not supported:
Multi-GPU training fails if a batch size is not specified:
Specify the BatchSize option when performing training on multiple GPUs:
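The multi-GPU training points above can be combined into one sketch, assuming the "RemoteMachineClass" key is passed via RemoteProviderSettings and with net and trainingData standing in for the user's network and data:

```wl
(* Sketch: multi-GPU training on a "GPU4xL4" machine class.
   BatchSize must be set explicitly for multi-GPU NetTrain. *)
RemoteBatchSubmit[env,
  NetTrain[net, trainingData,
    TargetDevice -> {"GPU", All}, BatchSize -> 64],
  RemoteProviderSettings -> <|"RemoteMachineClass" -> "GPU4xL4"|>]
```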
Tech Notes
History
Introduced in 2025 (14.3)