VideoMap[f,video]
applies f to partial video and audio data corresponding to one frame of video, returning a new video.

VideoMap[f,video,n]
applies f to data corresponding to overlapping partitions of n video frames.

VideoMap[f,video,n,d]
applies f to partitions with offset d.

VideoMap[f,{video1,video2,…},…]
applies f to a list of inputs extracted from each videoi.

Details and Options

  • VideoMap operates on video and audio partitions extracted from a Video object.
  • Using VideoMap[f,video,n], the partition slides by one image frame.
  • The function f can be any of the following:
  • f   image function to apply to all video tracks
    <|"Image"→fi,"Audio"→fa|>   functions to apply to video and audio tracks
  • Each of fi and fa can be one of the following:
  • Identity   copy the track over
    f   an arbitrary function
  • Each of fi and fa can take the following arguments:
  • #Image   video frames as Image objects
    #Audio   a chunk of the audio as an Audio object
    #Time   time from the beginning of the video
    #TimeInterval   beginning and end time stamps for the current partition
    #FrameIndex   index of the current output frame
    #InputFrameIndex   index of the current input frame
  • In VideoMap[f,{video1,video2,…},…], the data provided to each of the arguments is a list whose i^(th) element corresponds to the data extracted from videoi.
  • For multi-track video objects, data from the first video or audio track is fed to the function.
  • The result of fi can be a single Image object or a list of them, resulting in single or multiple video tracks. Similarly, fa can return a single Audio object or a list of Audio objects.
  • The partition size n and offset d can be given as a scalar in seconds, or as a time or sample Quantity object.
  • To process partitions in parallel, use Parallelize[VideoMap[…]].
  • By default, VideoMap places the new video under the "Video" directory in $WolframDocumentsDirectory.
  • VideoMap supports video containers and codecs specified by $VideoEncoders and $VideoDecoders.
  • The following options can be given:
  • Alignment   Automatic   alignment of the time stamps with partitions
    AudioEncoding   Automatic   audio encoding to use
    CompressionLevel   Automatic   compression level to use
    FrameRate   Automatic   the frame rate to use
    GeneratedAssetFormat   Automatic   the format of the result
    GeneratedAssetLocation   $GeneratedAssetLocation   the location of the result
    OverwriteTarget   False   whether to overwrite an existing file
    SubtitleEncoding   Automatic   subtitle encoding to use
    VideoEncoding   Automatic   video encoding to use
    VideoTransparency   False   whether the output video should have a transparency channel
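As a hedged sketch of how the asset options combine, the following writes the result to an explicit file and allows overwriting (the file name, the File[…] wrapper, and the use of ImageAdjust as the frame operation are all assumptions; video is any Video object):

```wl
(* process frames and direct the output to a named file, replacing any existing one *)
VideoMap[
 ImageAdjust[#Image] &,
 video,
 GeneratedAssetLocation -> File["processed.mp4"],
 OverwriteTarget -> True
]
```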



Basic Examples  (3)

Process frames of a video:
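A minimal sketch of per-frame processing might look like the following (the sample file name is hypothetical; substitute any Video object, and ImageAdjust is just a placeholder operation):

```wl
(* apply ImageAdjust to every frame of the input video *)
video = Video["ExampleData/Caminandes.mp4"];  (* hypothetical sample file *)
VideoMap[ImageAdjust[#Image] &, video]
```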

Process video frames using time-varying arguments:
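One way to use time-varying arguments is to feed #Time into the image operation; a sketch, assuming #Time arrives as a time Quantity (if it is already a plain number, drop the QuantityMagnitude call):

```wl
(* rotate each frame by an angle that grows with elapsed time *)
VideoMap[
 ImageRotate[#Image, 0.1 Pi * QuantityMagnitude[#Time, "Seconds"]] &,
 video
]
```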

Blend frames from two videos:
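With a list of videos, #Image is a list of corresponding frames, so blending could be sketched by averaging them (image arithmetic assumes the two videos share frame dimensions):

```wl
(* average corresponding frames of two videos *)
VideoMap[(First[#Image] + Last[#Image])/2 &, {video1, video2}]
```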

Scope  (8)

Function Specification  (5)

The function f receives an Association holding data for each partition:

Check the keys of the provided association:
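A hedged way to inspect the keys is to Sow them from inside the function and Reap the collected values afterwards:

```wl
(* record the keys of the association seen on each partition *)
{result, {keys}} = Reap[VideoMap[(Sow[Keys[#]]; #Image) &, video]];
First[keys]
```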

Process individual video frames:

By default, only a video track is generated:

Specify functions to generate a video and an audio track:

Use the Identity function to copy over a track without processing:
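As a sketch, an explicit association processes the frames while the audio track passes through untouched (ImageAdjust is a placeholder operation):

```wl
(* process the video track; copy the audio track verbatim *)
VideoMap[<|"Image" -> (ImageAdjust[#Image] &), "Audio" -> Identity|>, video]
```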

Return a list of images from the function to generate multiple video tracks:

Generate multiple audio tracks:

Use Nothing to indicate that no data should be written for a particular evaluation. Drop frames with mean intensity smaller than a threshold:
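The thresholding could be sketched as follows (the 0.1 cutoff is an arbitrary assumption):

```wl
(* write only frames whose mean intensity exceeds the threshold *)
VideoMap[
 If[ImageMeasurements[#Image, "MeanIntensity"] > 0.1, #Image, Nothing] &,
 video
]
```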

Return an association from the function to explicitly specify to which track the data belongs:

Partition Specification  (3)

Specify a partition size corresponding to four frames:

Specify a partition size using a time Quantity:

By default, an offset of one frame is used:

Use an offset of four frames:

Specify an offset using a time Quantity:

Specify an offset proportional to the partition size by a Scaled amount:
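Taken together, the size and offset arguments might be sketched like this (averaging a list of frames with Mean assumes images of identical dimensions):

```wl
(* partitions of 4 frames advancing 4 frames at a time: non-overlapping *)
VideoMap[Mean[#Image] &, video, 4, 4]

(* one-second partitions advancing half a partition each step *)
VideoMap[Mean[#Image] &, video, Quantity[1, "Seconds"], Scaled[1/2]]
```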

Process buffers of images from multiple videos:

Options  (2)

FrameRate  (2)

The FrameRate option specifies the frame rate of the resulting video:

By default, the frame rate of the original video is preserved:

When an offset is specified, the frame rate is adjusted proportionally in order to maintain a playback speed similar to the input. Sample every sixth frame:
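Sampling every sixth frame could be sketched as single-frame partitions with an offset of six; the output frame rate then drops to one sixth of the input's:

```wl
(* keep one frame out of every six *)
VideoMap[#Image &, video, 1, 6]
```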

Specify the frame rate to use:

The specified frame rate only affects the output video, and not the times sent in the association to the function:

Applications  (6)

Perform a time-varying image transformation:

Use audio data to process video frames:

Create a mosaic effect using a mosaic size increasing with time:

Add the audio spectrogram on the bottom of each frame:

Incorporate precomputed data in the generation of the new frames:

Use the external time series data to modify the video track:

Generate multiple tracks from the four corners of a video:

Inspect the result:

Show the first frame of each track:

Properties & Relations  (1)

Properties of the output video are typically inferred from the output of the function:

Generate a video file with different properties than the input video:

Possible Issues  (2)

The image function should produce images with consistent dimensions:

The audio function should produce audio objects with consistent properties:
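One defensive sketch against inconsistent frame dimensions is to normalize the size inside the function before anything is written (the 320×240 target is an arbitrary assumption):

```wl
(* force every output frame to a fixed size *)
VideoMap[ImageResize[#Image, {320, 240}] &, video]
```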

Wolfram Research (2020), VideoMap, Wolfram Language function, (updated 2022).

