VideoObjectTracking

VideoObjectTracking[video]

detects objects of interest in video and tracks them over video frames.

VideoObjectTracking[objects]

tracks the specified objects, assuming they come from the frames of a video.

VideoObjectTracking[input, detector]

uses detector to find objects of interest in the input.

Details and Options

  • VideoObjectTracking, also known as object tracking, tracks unique objects across the frames of a video, attempting to handle occlusions where possible. Tracked objects are also known as tracklets.
  • Tracking can either detect objects in frames automatically or be performed on a precomputed set of objects.
  • The result is an association with time keys and a list of tracked objects.
  • Possible settings for objects and their corresponding outputs are:

      {{pos11,pos12,…},…}            tracking points as k→posij
      {{bbox11,bbox12,…},…}          tracking boxes as k→bboxij
      {label1→{bbox11,bbox12,…},…}   tracking boxes as {labeli,j}→bbox
      {lmat1,…}                      relabeling segments in label matrices lmati
      {t1→obj1,…}                    a list of times and objects
  • By default, objects are detected using ImageBoundingBoxes. Possible settings for detector include:

      f                        a detector function that returns supported objects
      "concept"                named concept, as used in "Concept" entities
      "word"                   English word, as used in WordData
      wordspec                 word sense specification, as used in WordData
      Entity[…]                any appropriate entity
      category1|category2|…    any of the categoryi
  • Using VideoObjectTracking[{image1,image2,…}] is similar to tracking objects across frames of a video.
  • The following options can be given:

      Method          Automatic    tracking method to use
      TargetDevice    Automatic    the target device on which to perform detection
  • The possible values for the Method option are:

      "OCSort"           observation-centric SORT (simple, online, real-time) tracking; predicts object trajectories using Kalman estimators
      "RunningBuffer"    offline method that associates objects by comparing a buffer of frames
  • When tracking label matrices, occlusions are not handled. Label matrices can be tracked with Method->"RunningBuffer".
  • With Method->{"OCSort",subopt}, the following suboptions can be specified:

      "IOUThreshold"          0.2    intersection-over-union threshold between bounding boxes
      "OcclusionThreshold"    8      number of frames for which the history of a tracklet is maintained before expiration
      "OCMWeight"             0.2    observation-centric motion weight that accounts for the directionality of moving bounding boxes
      "ORUHistory"            3      length of tracklet history to step back for tracklet re-update
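For instance, a stricter box-association threshold can be requested via a suboption. In this sketch, video stands for any Video object and the threshold value is illustrative:

```wolfram
(* Sketch: OCSort tracking with a custom IOU threshold; video is a placeholder Video object *)
VideoObjectTracking[video, Method -> {"OCSort", "IOUThreshold" -> 0.3}]
```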
  • With Method->{"RunningBuffer",subopt}, the following suboptions can be specified:

      "MaxCentroidDistance"    Automatic    maximum distance between centroids in adjacent frames
      "OcclusionThreshold"     8            number of frames for which the history of a tracklet is maintained before expiration
  • Additional "RunningBuffer" suboptions specifying contributions to the cost matrix are:

      "CentroidWeight"    0.5          centroid distance between components or bounding boxes
      "OverlapWeight"     1            overlap of components or bounding boxes
      "SizeWeight"        Automatic    size of components or bounding boxes

Examples


Basic Examples  (1)

Detect and track objects in a video:
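A minimal sketch of such a call; the file path is a placeholder for an actual video file:

```wolfram
(* Sketch: detect and track objects in a video; the path is a placeholder *)
video = Video["path/to/video.mp4"];
tracks = VideoObjectTracking[video]
```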

Scope  (5)

Objects  (3)

Detect and track objects in a video:

Detect and track objects in a list of images:

Track a list of bounding boxes:
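Such a call might look like the following sketch, where the per-frame box lists are hand-made for illustration:

```wolfram
(* Sketch: track two frames' worth of illustrative bounding boxes *)
boxes = {
   {Rectangle[{10, 10}, {40, 40}]},
   {Rectangle[{14, 12}, {44, 42}]}
   };
VideoObjectTracking[boxes]
```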

Object Detectors  (2)

Automatically detect objects and track them:

Specify a detector function to find objects:
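A detector can be any function that returns supported objects for a frame. In this sketch, video is a placeholder and the "vehicle" category is illustrative:

```wolfram
(* Sketch: ImageBoundingBoxes restricted to a category, used as a custom detector *)
detector = ImageBoundingBoxes[#, "vehicle"] &;
VideoObjectTracking[video, detector]
```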

Specify the category of object to detect and track:

Detect and track faces in a video:
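For faces, FindFaces (which returns face bounding boxes) can plausibly serve as the detector function; video is a placeholder in this sketch:

```wolfram
(* Sketch: use FindFaces as the detector to track faces *)
VideoObjectTracking[video, FindFaces]
```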

Applications  (10)

Basic Uses  (2)

Detect and track objects in a video:

Highlight objects on the video; notice all are labeled with their detected classes:

Track the detected objects:

Highlight tracked detected objects with their corresponding indices:
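The steps above might be sketched as follows, assuming the tracking result can be passed to HighlightVideo directly; video is a placeholder:

```wolfram
(* Sketch: overlay tracked objects on the video *)
tracks = VideoObjectTracking[video];
HighlightVideo[video, tracks]
```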

Track labeled components from matrices:

Define a segmentation function that works on each frame:

Segment frames and show the individual components:

Track the components across frames and show tracked components:
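The per-frame segmentation workflow above can be sketched as follows; the frame count is illustrative, and Binarize/MorphologicalComponents stand in for whatever segmentation function is appropriate:

```wolfram
(* Sketch: segment frames into label matrices, then track the components *)
frames = VideoFrameList[video, 10];  (* 10 uniformly spaced frames *)
segment = MorphologicalComponents[Binarize[#]] &;
VideoObjectTracking[segment /@ frames, Method -> "RunningBuffer"]
```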

Count Objects  (3)

Count the number of detected objects in a video:

Track objects and find unique instances:

Get the final counts:
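The counting steps might be sketched as follows; treating the tracked-object keys as {label, index} pairs is an assumption about the result format:

```wolfram
(* Sketch: count unique tracked instances per label *)
tracks = VideoObjectTracking[video];
uniqueKeys = Union[Flatten[Keys /@ Values[tracks], 1]];  (* assumed {label, index} pairs *)
Counts[First /@ uniqueKeys]
```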

Count occurrences of a specific object:

Track objects and find unique instances:

Get the final counts:

Count the number of elephants in a video:

Extract Tracked Objects  (1)

Detect and track the contents of a video:

Extract the first of the detected labels:

Extract the sub-video corresponding to the first tracked object:
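Extracting the sub-video can be sketched with VideoTrim, assuming the object's first and last appearance times t1 and t2 have been read off from the tracking result:

```wolfram
(* Sketch: trim the video to one object's time span; t1 and t2 are placeholders *)
VideoTrim[video, {t1, t2}]
```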

Visualize Motion Trajectories  (1)

Track pedestrians in a railway station:

Detect the bounding boxes and show them over the original video:

Track the boxes:

Plot the trajectories of the centroids of the boxes:
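The trajectory plot might be sketched as follows, assuming trackBoxes associates each tracked object with its list of bounding boxes:

```wolfram
(* Sketch: plot centroid trajectories of tracked bounding boxes *)
centroid[Rectangle[min_, max_]] := (min + max)/2;
ListLinePlot[Map[centroid, Values[trackBoxes], {2}]]
```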

Overlay the trajectories onto the original video:

Analyze Wildlife Videos  (3)

Track a herd of migrating elephants:

Highlight frames with the tracked elephants:

Track a herd of galloping horses:

Track a flock of sheep entering a barn:


Text

Wolfram Research (2025), VideoObjectTracking, Wolfram Language function, https://reference.wolfram.com/language/ref/VideoObjectTracking.html.

CMS

Wolfram Language. 2025. "VideoObjectTracking." Wolfram Language & System Documentation Center. Wolfram Research. https://reference.wolfram.com/language/ref/VideoObjectTracking.html.

APA

Wolfram Language. (2025). VideoObjectTracking. Wolfram Language & System Documentation Center. Retrieved from https://reference.wolfram.com/language/ref/VideoObjectTracking.html

BibTeX

@misc{reference.wolfram_2025_videoobjecttracking, author="Wolfram Research", title="{VideoObjectTracking}", year="2025", howpublished="\url{https://reference.wolfram.com/language/ref/VideoObjectTracking.html}", note={Accessed: 15-January-2025}}

BibLaTeX

@online{reference.wolfram_2025_videoobjecttracking, organization={Wolfram Research}, title={VideoObjectTracking}, year={2025}, url={https://reference.wolfram.com/language/ref/VideoObjectTracking.html}, note={Accessed: 15-January-2025}}