2. INTRODUCTION
• Visual surveillance systems today consist of a large number of cameras, usually monitored
by a relatively small team of human operators.
• Recent studies have shown that the average human can focus on tracking the movements
of up to four dynamic targets simultaneously, and can efficiently detect changes to the
attended targets but not the neighboring distractors.
• When targets and distractors are too close, it becomes difficult to individuate the targets
and maintain tracking efficiently.
• Further, according to the classical spotlight theory of visual attention, people can attend to
only one region of space (i.e., one area of the visual field) at a time or, at most, two.
• Simply stated, the human visual processing capacity and attentiveness required for the
effective monitoring of crowded scenes or multiple screens within a surveillance system are
limited.
4. COMPUTATIONAL MODULE
I. Detection of Unattended Baggage
• The goal of the first module of the algorithm is the detection of any stationary baggage. Until such an
event occurs, it is unnecessary to track and monitor all ongoing activities in the scene. Deferring full
tracking in this way not only cuts computational cost but also avoids the ambiguities that arise from
tracking inaccuracies in scenes with heavy movement and occlusion.
• The representation of bags is established using typical shape and size characteristics. The classifier is
trained off-line, using the following features (a rough MATLAB sketch of computing them follows this list):
• Compactness – the ratio of area to squared perimeter (multiplied by 4π for normalization)
• Solidity ratio – the extent to which the blob area covers the convex hull area
• Eccentricity – the ratio of the major axis to the minor axis of an ellipse that envelops the blob
• To ensure that the bag remains stationary while left alone, as well as to reinforce the decision of the
classifier, each suspect blob is tracked over a number of consecutive frames (usually around 10) to
check for consistency of detection and position, before declaring it unattended and moving on
to look for its potential owner(s).
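• As a rough illustration, the three shape features above can be computed from a binary foreground
mask with regionprops; the mask BW is a hypothetical input, and the exact normalizations are
assumptions of this sketch, not the paper's implementation.

% Sketch: shape features for the bag classifier, computed per blob in a
% binary foreground mask BW (hypothetical input). Feature definitions
% follow this document, not regionprops' own conventions.
stats = regionprops(BW, 'Area', 'Perimeter', 'Solidity', ...
                    'MajorAxisLength', 'MinorAxisLength');
features = zeros(numel(stats), 3);
for k = 1:numel(stats)
    s = stats(k);
    compactness  = 4*pi*s.Area / s.Perimeter^2;            % 1 for a perfect circle
    solidity     = s.Solidity;                             % blob area / convex hull area
    eccentricity = s.MajorAxisLength / s.MinorAxisLength;  % axis ratio of the fitted ellipse
    features(k,:) = [compactness, solidity, eccentricity]; % row fed to the off-line classifier
end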
6. CURRENT APPROACH: BLOB ANALYSIS SYSTEM
• Extract a region of interest (ROI), thus eliminating video areas that are unlikely to
contain abandoned objects.
• Perform video segmentation using background subtraction.
• Track objects based on their area and centroid statistics.
• Visualize the results.
7. EXTRACT A REGION OF INTEREST (ROI)
• The ROI is defined as roi = [x y width height], where (x, y) is the top-left corner of the
portion of the image to be processed, and width and height give its extent. Only this portion of
each frame is analyzed.
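• As a concrete sketch (the test image and the bounded crop are illustrative assumptions; the
processing loop in the code section crops from the ROI origin to the frame border instead):

roi  = [100 80 360 240];       % [x y width height]
Im   = imread('peppers.png');  % any test image stands in for a video frame
Crop = Im(roi(2):roi(2)+roi(4)-1, roi(1):roi(1)+roi(3)-1, :);  % rows index y, columns index x
imshow(Crop)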
8. PERFORM VIDEO SEGMENTATION USING BACKGROUND SUBTRACTION
• Create a ColorSpaceConverter System object to convert the RGB image to Y'CbCr
format.
• Create an Autothresholder System object with a threshold scale factor.
• Create a MorphologicalClose System object to fill in small gaps in the detected
objects (a small demonstration of this step follows the list).
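• The gap-filling effect of the closing step can be previewed with the equivalent imclose function;
the sample image and the artificial gap below are assumptions for illustration only.

BW = imread('circles.png') > 0;            % sample binary image shipped with the Image Processing Toolbox
BW(100:102, :) = false;                    % cut an artificial gap through the shapes
Closed = imclose(BW, strel('square', 5));  % same structuring element as the detection code
imshowpair(BW, Closed, 'montage')          % gaps narrower than the window are filled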
• Track objects based on their area and centroid statistics.
13. CODES
roi = [100 80 360 240];  % Define the region of interest: [x y width height], the portion of each frame to process

% Maximum number of objects to track
maxNumObj = 200;
% Number of frames that an object must remain stationary before an alarm is raised
alarmCount = 45;
% Maximum number of frames that an abandoned object can be hidden before it is no longer tracked
maxConsecutiveMiss = 4;
areaChangeFraction = 20;      % Maximum allowable change in object area, in percent
centroidChangeFraction = 30;  % Maximum allowable change in object centroid, in percent
% Minimum ratio between the number of frames in which an object is detected
% and the total number of frames, for that object to be tracked
minPersistenceRatio = 0.3;
% Offsets for drawing bounding boxes in the original input video
% (int32 matches the BlobAnalysis output type; repmat repeats the row once per object)
PtsOffset = int32(repmat([roi(1), roi(2), 0, 0], [maxNumObj 1]));
%%
% Create a VideoFileReader System object to read video from a file.
hVideoSrc = vision.VideoFileReader;
hVideoSrc.Filename = 'Abandoned_Bag1.mp4';
hVideoSrc.VideoOutputDataType = 'single';
%%
% Create a ColorSpaceConverter System object to convert the RGB image to
% Y'CbCr format. (Y'CbCr is a coordinate transformation of an associated RGB color space.)
hColorConv = vision.ColorSpaceConverter('Conversion', 'RGB to YCbCr');
%%
% Create an Autothresholder System object to binarize the difference images,
% with a scale factor applied to the automatically computed threshold.
hAutothreshold = vision.Autothresholder('ThresholdScaleFactor', 1.3);
%%
% Create a MorphologicalClose System object to fill in small gaps in the detected objects.
hClosing = vision.MorphologicalClose('Neighborhood', strel('square',5));
%%
% Create a BlobAnalysis System object to find the area, centroid, and bounding
% box of the objects in the video.
hBlob = vision.BlobAnalysis('MaximumCount', maxNumObj, 'ExcludeBorderBlobs', true);
hBlob.MinimumBlobArea = 100;
hBlob.MaximumBlobArea = 2500;
%%
% Create System objects to display the results.
pos = [10 300 roi(3)+25 roi(4)+25];
hAbandonedObjects = vision.VideoPlayer('Name', 'Abandoned Objects', 'Position', pos);
pos(1) = 46+roi(3);  % move the next viewer to the right
hAllObjects = vision.VideoPlayer('Name', 'All Objects', 'Position', pos);
pos = [80+2*roi(3) 300 roi(3)-roi(1)+25 roi(4)-roi(2)+25];
hThresholdDisplay = vision.VideoPlayer('Name', 'Threshold', 'Position', pos);
%% Video Processing Loop
% Create a processing loop to perform abandoned object detection on the input
% video. This loop uses the System objects instantiated above.
firsttime = true;
while ~isDone(hVideoSrc)
    Im = step(hVideoSrc);

    % Select the region of interest from the original video
    OutIm = Im(roi(2):end, roi(1):end, :);
    YCbCr = step(hColorConv, OutIm);
    CbCr  = complex(YCbCr(:,:,2), YCbCr(:,:,3));

    % Store the first video frame as the background
    if firsttime
        firsttime = false;
        BkgY    = YCbCr(:,:,1);
        BkgCbCr = CbCr;
    end

    % Segment by thresholding the luma and chroma differences from the background
    SegY    = step(hAutothreshold, abs(YCbCr(:,:,1)-BkgY));
    SegCbCr = abs(CbCr-BkgCbCr) > 0.05;
    % Fill in small gaps in the detected objects
    Segmented = step(hClosing, SegY | SegCbCr);

    % Perform blob analysis
    [Area, Centroid, BBox] = step(hBlob, Segmented);

    % Call the helper function that tracks the identified objects and
    % returns the bounding boxes and the number of abandoned objects
    [OutCount, OutBBox] = videoobjtracker(Area, Centroid, BBox, maxNumObj, ...
        areaChangeFraction, centroidChangeFraction, maxConsecutiveMiss, ...
        minPersistenceRatio, alarmCount);

    % Display the abandoned object detection results
    Imr = insertShape(Im,'FilledRectangle',OutBBox+PtsOffset,...
        'Color','red','Opacity',0.5);
    % Insert the number of abandoned objects in the frame
    Imr = insertText(Imr, [1 1], OutCount);
    step(hAbandonedObjects, Imr);

    % Display all the detected objects
    BlobCount  = size(BBox,1);
    BBoxOffset = BBox + int32(repmat([roi(1) roi(2) 0 0],[BlobCount 1]));
    Imr = insertShape(Im,'Rectangle',BBoxOffset,'Color','green');
    Imr = insertText(Imr, [1 1], OutCount);
    Imr = insertShape(Imr,'Rectangle',roi);
    step(hAllObjects, Imr);

    % Display the segmented video
    SegBBox = PtsOffset;
    SegBBox(1:BlobCount,:) = BBox;
    SegIm = insertShape(double(repmat(Segmented,[1 1 3])),'Rectangle',SegBBox,'Color','green');
    step(hThresholdDisplay, SegIm);
end
release(hVideoSrc);
h = msgbox('The object has been detected!');
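The helper function videoobjtracker called inside the loop ships with the MathWorks "Abandoned
Object Detection" example and is not reproduced on these slides. Below is a minimal sketch of what
such a helper can look like, assuming a simple nearest-match scheme over the area and centroid
tolerances; it illustrates the intended bookkeeping, not the shipped implementation.

function [OutCount, OutBBox] = videoobjtracker(Area, Centroid, BBox, maxNumObj, ...
        areaChangeFraction, centroidChangeFraction, maxConsecutiveMiss, ...
        minPersistenceRatio, alarmCount)
% Simplified sketch: maintain a list of candidate stationary objects and
% flag those matched (nearly unchanged) for at least alarmCount frames.
persistent cand
if isempty(cand)
    cand = struct('centroid', {}, 'area', {}, 'bbox', {}, ...
                  'seen', {}, 'miss', {}, 'age', {});
end
nBlobs  = size(BBox, 1);
matched = false(nBlobs, 1);
% Match each existing candidate against the blobs in this frame
for j = 1:numel(cand)
    cand(j).age  = cand(j).age + 1;
    cand(j).miss = cand(j).miss + 1;
    for i = 1:nBlobs
        if matched(i), continue; end
        dArea = 100 * abs(double(Area(i)) - cand(j).area) / cand(j).area;
        dCent = 100 * norm(double(Centroid(i,:)) - cand(j).centroid) ...
                    / max(norm(cand(j).centroid), 1);
        if dArea <= areaChangeFraction && dCent <= centroidChangeFraction
            cand(j).bbox = BBox(i,:);
            cand(j).seen = cand(j).seen + 1;
            cand(j).miss = 0;
            matched(i)   = true;
            break
        end
    end
end
% Drop candidates missed too long or not detected persistently enough
if ~isempty(cand)
    keep = [cand.miss] <= maxConsecutiveMiss & ...
           [cand.seen] ./ [cand.age] >= minPersistenceRatio;
    cand = cand(keep);
end
% Unmatched blobs start new candidates (up to maxNumObj)
for i = 1:nBlobs
    if ~matched(i) && numel(cand) < maxNumObj
        cand(end+1) = struct('centroid', double(Centroid(i,:)), ...
            'area', double(Area(i)), 'bbox', BBox(i,:), ...
            'seen', 1, 'miss', 0, 'age', 1); %#ok<AGROW>
    end
end
% Report candidates that have stayed put for at least alarmCount frames
OutBBox  = zeros(maxNumObj, 4, 'int32');
OutCount = 0;
for j = 1:numel(cand)
    if cand(j).seen >= alarmCount
        OutCount  = OutCount + 1;
        OutBBox(OutCount, :) = cand(j).bbox;
    end
end
end

The actual example helper additionally distinguishes newly appeared blobs from the scene background;
the sketch above keeps only the stationarity and persistence checks configured by the parameters.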
20. REFERENCES
• M. Bhargava, C.-C. Chen, M. S. Ryoo, and J. K. Aggarwal, The University of Texas at Austin.
• C. Sears and Z. Pylyshyn, "Multiple Object Tracking".
• J. Martinez-del-Rincon, J. Elías Herrero, Jorge Gómez, and Carlos Orrite-Uruñuela,
"Automatic Left Luggage Detection and Tracking using Multi-Camera".
• Anoop Mathew.
21. THANK YOU
MADE BY: ARUSHI CHAUDHRY AND SAUMYA TIWARI