VideoObjectTracking
VideoObjectTracking[video]
detects objects of interest in video and tracks them over video frames.
VideoObjectTracking[objects]
establishes correspondences between the given objects and tracks them, assuming they come from video frames.
VideoObjectTracking[…,detector]
uses detector to find objects of interest in the input.
Details and Options
- VideoObjectTracking, also known as object tracking, tracks unique objects across the frames of a video, attempting to handle occlusions when possible. Tracked objects are also known as tracklets.
- Tracking can automatically detect objects in frames or be performed on a precomputed set of objects.
- The result is an association with time keys and a list of tracked objects.
- Possible settings for objects and their corresponding outputs are:
    {{pos11,pos12,…},…}   tracking points as k->posij
    {{bbox11,bbox12,…},…}   tracking boxes as k->bboxij
    {{label1->{bbox11,bbox12,…},…},…}   tracking boxes as {labeli,j}->bbox
    {lmat1,…}   relabeling of segments in label matrices lmati
    {t1->obj1,…}   a list of times and objects
- By default, objects are detected using ImageBoundingBoxes. Possible settings for detector include (a usage sketch follows these details):
    f   a detector function that returns supported objects
    "concept"   named concept, as used in "Concept" entities
    "word"   English word, as used in WordData
    wordspec   word sense specification, as used in WordData
    Entity[…]   any appropriate entity
    category1|category2|…   any of the categoryi
- Using VideoObjectTracking[{image1,image2,…}] is similar to tracking objects across frames of a video.
- The following options can be given:
    Method   Automatic   tracking method to use
    TargetDevice   Automatic   the target device on which to perform detection
- The possible values for the Method option are:
    "OCSort"   observation-centric SORT (simple, online, real-time) tracking; predicts object trajectories using Kalman estimators
    "RunningBuffer"   offline method; associates objects by comparing a buffer of frames
- When tracking label matrices, occlusions are not handled by default; they can be tracked with Method->"RunningBuffer".
- With Method->{"OCSort",subopt}, the following suboptions can be specified:
-
"IOUThreshold" 0.2 intersection over union threshold between bounding boxes "OcclusionThreshold" 8 number of frames for which history of a tracklet is maintained before expiration "OCMWeight" 0.2 observation-centric motion weight that accounts for the directionality of moving bounding boxes "ORUHistory" 3 length of tracklet history to step back for tracklet re-update - With Method->{"RunningBuffer",subopt}, the following suboptions can be specified:
    "MaxCentroidDistance"   Automatic   maximum distance between centroids in adjacent frames
    "OcclusionThreshold"   8   number of frames for which the history of a tracklet is maintained before expiration
- Additional "RunningBuffer" suboptions specifying contributions to the cost matrix are:
    "CentroidWeight"   0.5   centroid distance between components or bounding boxes
    "OverlapWeight"   1   overlap of components or bounding boxes
    "SizeWeight"   Automatic   size of components or bounding boxes
Examples
Scope (5)
Objects (3)
Applications (10)
Basic Uses (2)
Detect and track objects in a video:
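A minimal sketch, assuming a placeholder example clip:
    video = Video["ExampleData/Caminandes.mp4"];  (* placeholder video file *)
    tracks = VideoObjectTracking[video]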
Highlight objects on the video; notice all are labeled with their detected classes:
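A sketch, assuming HighlightVideo accepts the tracking result directly:
    HighlightVideo[video, tracks]  (* assumes the default labeling shows the detected classes *)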
Highlight tracked detected objects with their corresponding indices:
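A sketch, assuming each per-frame association is keyed by {label, index} pairs as in the objects table above, so keeping the index part of each key relabels the boxes:
    HighlightVideo[video, Map[KeyMap[Last], tracks]]  (* hypothetical: keeps the track index from each {label, index} key *)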
Track labeled components from matrices:
Define a segmentation function that works on each frame:
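A sketch of one possible segmentation function (binarization followed by connected components; not necessarily the segmentation used in the original example):
    segment[frame_Image] := MorphologicalComponents[Binarize[frame]]  (* returns a label matrix *)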
Segment frames and show the individual components:
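A sketch, sampling a few frames with VideoFrameList (the frame count is arbitrary):
    frames = VideoFrameList[video, 10];  (* 10 evenly spaced frames *)
    labels = segment /@ frames;
    Colorize /@ labels  (* show the individual components *)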
Track the components across frames and show tracked components:
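A sketch, assuming the result values are the relabeled label matrices:
    tracked = VideoObjectTracking[labels, Method -> "RunningBuffer"];
    Colorize /@ Values[tracked]  (* show components relabeled consistently across frames *)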
Count Objects (3)
Extract Tracked Objects (1)
Visualize Motion Trajectories (1)
Text
Wolfram Research (2025), VideoObjectTracking, Wolfram Language function, https://reference.wolfram.com/language/ref/VideoObjectTracking.html.
CMS
Wolfram Language. 2025. "VideoObjectTracking." Wolfram Language & System Documentation Center. Wolfram Research. https://reference.wolfram.com/language/ref/VideoObjectTracking.html.
APA
Wolfram Language. (2025). VideoObjectTracking. Wolfram Language & System Documentation Center. Retrieved from https://reference.wolfram.com/language/ref/VideoObjectTracking.html