VideoTimeSeries

VideoTimeSeries[f,video]

applies f to each frame of the Video object video, returning a time series.

VideoTimeSeries[f,video,n]

applies f to non-overlapping partitions of n video frames.

VideoTimeSeries[f,video,n,d]

applies f to partitions with offset d.

Details and Options

  • VideoTimeSeries can be used to detect temporal or spatial events in videos, such as object detection, motion detection, or activity recognition.
  • VideoTimeSeries returns a TimeSeries whose values are the results of f applied to each video frame or partition of video frames. The times are the timestamps of the corresponding frame or partition.
  • Frame specifications n and d can be given as positive integers, specifying numbers of frames, or as time Quantity objects.
  • VideoTimeSeries supports video containers and codecs specified by $VideoDecoders.
  • The following options can be given:
  • Alignment	Center	alignment of the timestamps with partitions
    MetaInformation	None	additional metainformation to include
    MissingDataMethod	None	method to use for missing values
    ResamplingMethod	"Interpolation"	the method to use for resampling paths
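As a minimal sketch of time-based partitioning, the following averages frame intensity over non-overlapping one-second windows, aligning each timestamp with the start of its window. It assumes a Video object video is already defined; the measurement choice is illustrative:

```wolfram
(* mean intensity per one-second window; f receives a list of frames *)
ts = VideoTimeSeries[
   Mean[ImageMeasurements[#, "MeanIntensity"] & /@ #] &,
   video, Quantity[1, "Seconds"], Alignment -> Left]
```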

Examples


Basic Examples  (2)

Compute the mean of the RGB colors for every video frame and plot them:
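A sketch of this example, assuming a Video object video is already defined:

```wolfram
(* "Mean" gives the mean value of each color channel *)
ts = VideoTimeSeries[ImageMeasurements[#, "Mean"] &, video];
ListLinePlot[ts, PlotStyle -> {Red, Green, Blue}]
```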

Compute image distance between consecutive frames:

Plot the result, showing times with significant scene change:
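The two steps above can be sketched as follows, assuming a Video object video is already defined. Partitions of 2 frames with offset 1 yield consecutive frame pairs, and peaks in the plot indicate scene changes:

```wolfram
(* f receives a pair of frames; Apply passes them to ImageDistance *)
ts = VideoTimeSeries[Apply[ImageDistance], video, 2, 1];
ListLinePlot[ts, Filling -> Axis]
```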

Applications  (2)

Find portions of a video with constant images:

Define a function to detect whether an image has constant pixel values:

Apply the function to each frame and plot the result:
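The detector and its application can be sketched as follows, assuming a Video object video is already defined; the threshold is an illustrative choice:

```wolfram
(* a frame is "constant" if its per-channel standard deviation is near zero *)
constantQ[img_] := Max[ImageMeasurements[img, "StandardDeviation"]] < 0.01

(* Boole converts True/False to 1/0 for plotting *)
ts = VideoTimeSeries[Boole[constantQ[#]] &, video];
ListLinePlot[ts]
```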

Count the number of cars in each frame:
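One way to sketch this, assuming a Video object video is already defined, is with ImageCases, which extracts sub-images matching an object category; counting its results per frame gives the car count:

```wolfram
ts = VideoTimeSeries[Length[ImageCases[#, "car"]] &, video];
ListLinePlot[ts]
```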

Possible Issues  (1)

When the function returns a list, all returned lists should have the same dimensions:

Pad or trim the resulting lists to the same size to store them in the TimeSeries:
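A sketch of padding to a fixed length, assuming a Video object video is already defined; DominantColors may return lists of different lengths per frame, so PadRight makes them uniform (the length 5 and Black padding are illustrative):

```wolfram
ts = VideoTimeSeries[PadRight[DominantColors[#], 5, Black] &, video]
```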

Results may also be wrapped into other containers before being stored in a TimeSeries:
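For example, each variable-length result can be wrapped in a single Association so that the TimeSeries stores one value per timestamp; this sketch assumes a Video object video is already defined:

```wolfram
ts = VideoTimeSeries[<|"Colors" -> DominantColors[#]|> &, video]
```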

Introduced in 2020 (12.1)