DETECTION OF ABANDONED OBJECTS
IN CROWDED ENVIRONMENTS
INTRODUCTION
• Visual surveillance systems today consist of a large number of cameras, usually monitored
by a relatively small team of human operators.
• Recent studies have shown that the average human can focus on tracking the movements
of up to four dynamic targets simultaneously, and can efficiently detect changes to the
attended targets but not the neighboring distractors.
• When targets and distractors are too close, it becomes difficult to individuate the targets
and maintain tracking efficiently.
• Further, according to the classical spotlight theory of visual attention, people can attend to
only one region of space (i.e. area in view) at a time, or at most, two.
• Simply stated, the human visual processing capability and attentiveness required for the
effective monitoring of crowded scenes or multiple screens within a surveillance system is
limited.
PROPOSED ALGORITHM
The algorithm comprises four sub-events, including:
1. Detection of an unattended bag.
2. Reverse traversal through previous frames to discover the likely owner.
COMPUTATIONAL MODULE
I. Detection of Unattended Baggage
• The goal of the first module of the algorithm is the detection of any stationary baggage. Until such an
event occurs, it is unnecessary to track and monitor all ongoing activities in the scene. Doing so not
only cuts computational costs but also avoids ambiguities born of inaccuracies in tracking in the
presence of much movement and occlusion.
• The representation of bags is established using typical shape and size characteristics. The classifier is
trained off-line, using the following features:
• Compactness – the ratio of area to squared perimeter (multiplied by 4π for normalization)
• Solidity ratio – the extent to which the blob area covers the convex hull area
• Eccentricity – the ratio of major axis to minor axis of an ellipse that envelopes the blob
• To ensure that the bag remains stationary while left alone as well as to reinforce the decision of the
classifier, each suspect blob is tracked over a number of consecutive frames (usually, around 10) to
check for the consistency of detection and position, before declaring it as unattended and moving on
to look for its potential owner(s).
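The three shape features above can be computed directly from a binary blob mask. Below is a minimal Python/NumPy sketch (the deck's implementation is MATLAB; the function name `blob_features` and the pixel-resolution perimeter and convex-hull approximations are illustrative assumptions, not the authors' code):

```python
import math
import numpy as np

def blob_features(mask):
    """Compactness, solidity, and eccentricity of a binary blob mask."""
    ys, xs = np.nonzero(mask)
    area = len(xs)

    # Perimeter estimate: foreground pixels with at least one 4-neighbour background.
    p = np.pad(mask, 1)
    interior = p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]
    perimeter = int((mask & ~interior).sum())

    # Compactness: 4*pi*area / perimeter^2 (approaches 1 for a disk).
    compactness = 4 * math.pi * area / perimeter ** 2

    # Eccentricity (as defined on the slide): major/minor axis ratio of the
    # covariance ellipse fitted to the pixel coordinates.
    evals = np.linalg.eigvalsh(np.cov(np.stack([xs, ys]).astype(float)))
    eccentricity = math.sqrt(evals[1] / max(evals[0], 1e-12))

    # Solidity: blob area over convex-hull area (monotone-chain hull of
    # pixel centres; approximate at pixel resolution).
    pts = sorted(set(zip(map(int, xs), map(int, ys))))
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    hull = []
    for seq in (pts, pts[::-1]):          # lower hull, then upper hull
        part = []
        for q in seq:
            while len(part) >= 2 and cross(part[-2], part[-1], q) <= 0:
                part.pop()
            part.append(q)
        hull += part[:-1]
    hull_area = 0.5 * abs(sum(hull[i][0]*hull[(i+1) % len(hull)][1]
                              - hull[(i+1) % len(hull)][0]*hull[i][1]
                              for i in range(len(hull))))
    solidity = area / max(hull_area, 1e-12)
    return compactness, solidity, eccentricity

mask = np.zeros((30, 70), dtype=bool)
mask[10:20, 15:55] = True                 # a 10 x 40 rectangular "bag" blob
c, s, e = blob_features(mask)
```

For the rectangular test blob, compactness is well below 1 (a rectangle is not disk-like), solidity is near 1 (a rectangle is convex), and eccentricity is roughly the 4:1 side ratio.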
CURRENT APPROACH: BLOB
ANALYSIS SYSTEM
• Extract a region of interest (ROI), thus eliminating video areas that are unlikely to
contain abandoned objects.
• Perform video segmentation using background subtraction.
• Track objects based on their area and centroid statistics.
• Visualize the results.
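The four steps above can be sketched end-to-end on a toy scene. This is a hedged Python illustration only (the deck uses MATLAB System objects; `detect_blobs` and the synthetic frame are made up for the example), using plain frame differencing and a BFS connected-component pass to recover each blob's area and centroid:

```python
import numpy as np

def detect_blobs(frame, background, thresh=0.1):
    """Background subtraction + threshold + connected components.
    Returns a list of (area, (cx, cy)) tuples, one per blob."""
    diff = np.abs(frame.astype(float) - background.astype(float)) > thresh
    labels = np.zeros(diff.shape, int)
    blobs = []
    next_label = 0
    for y, x in zip(*np.nonzero(diff)):
        if labels[y, x]:
            continue                       # pixel already assigned to a blob
        next_label += 1
        labels[y, x] = next_label
        stack, pix = [(y, x)], []
        while stack:                       # BFS/DFS flood fill, 4-connectivity
            cy, cx = stack.pop()
            pix.append((cy, cx))
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = cy + dy, cx + dx
                if (0 <= ny < diff.shape[0] and 0 <= nx < diff.shape[1]
                        and diff[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = next_label
                    stack.append((ny, nx))
        ysum = sum(p[0] for p in pix)
        xsum = sum(p[1] for p in pix)
        blobs.append((len(pix), (xsum / len(pix), ysum / len(pix))))
    return blobs

# Toy scene: flat background, one 6 x 8 "bag" appears in the frame.
bg = np.zeros((40, 64))
frame = bg.copy()
frame[10:16, 20:28] = 0.8
blobs = detect_blobs(frame, bg)
```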
EXTRACT A REGION OF INTEREST
(ROI)
• The ROI is specified as roi = [x y width height], where (x, y) is the top-left corner of the portion of the image to be processed, and width and height give the size of that portion.
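For illustration, the same [x y width height] crop expressed in Python/NumPy (the 480x640 frame size here is a made-up example):

```python
import numpy as np

roi = [100, 80, 360, 240]            # [x, y, width, height]; (x, y) is the top-left corner

frame = np.zeros((480, 640, 3))      # stand-in for a video frame (H x W x C)
x, y, w, h = roi
cropped = frame[y:y+h, x:x+w, :]     # rows index y, columns index x
```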
PERFORM VIDEO SEGMENTATION USING
BACKGROUND SUBTRACTION
• Create a Color Space Converter System object to convert the RGB image to Y'CbCr
format.
• Create an Autothresholder System object with a threshold scale factor.
• Create a Morphological Close System object to fill in small gaps in the detected
objects.
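A hedged NumPy sketch of these steps: absolute-difference thresholding against a stored background frame, followed by a morphological close (dilate, then erode, with a 3x3 square; the deck's MATLAB code uses a 5x5 structuring element and Y'CbCr channels, simplified here to a single channel, and border handling is approximate):

```python
import numpy as np

def dilate(m):
    """3x3 square dilation of a boolean mask."""
    p = np.pad(m, 1)
    out = np.zeros_like(m)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1+dy:1+dy+m.shape[0], 1+dx:1+dx+m.shape[1]]
    return out

def erode(m):
    """3x3 square erosion, expressed as the dual of dilation."""
    return ~dilate(~m)

def segment(frame, background, thresh=0.05):
    mask = np.abs(frame - background) > thresh
    return erode(dilate(mask))       # morphological close fills small gaps

# A 5x5 object with a one-pixel hole: the close fills the hole.
bg = np.zeros((10, 10))
frame = bg.copy()
frame[2:7, 2:7] = 1.0
frame[4, 4] = 0.0                    # small gap inside the detected object
closed = segment(frame, bg)
```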
TRACK OBJECTS BASED ON THEIR AREA
AND CENTROID STATISTICS.
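This slide carries no accompanying text; as an illustration of what area/centroid tracking involves, here is a minimal Python sketch. The deck's `videoobjtracker` helper is not shown, so `update_tracks`, its greedy nearest-centroid matching, and its thresholds are assumptions for illustration, not the authors' code:

```python
import math

def update_tracks(tracks, detections, max_area_change=0.2,
                  max_dist=30.0, alarm_count=45):
    """Match detections to tracks by centroid distance and area similarity.
    A track that keeps matching (i.e. stays put) accumulates stationary
    frames; once the count reaches alarm_count, it is reported as abandoned.
    tracks: list of dicts {'area', 'centroid', 'stationary'} (mutated in place)."""
    unmatched = list(detections)
    for tr in tracks:
        best, best_d = None, max_dist
        for det in unmatched:
            d = math.dist(tr['centroid'], det['centroid'])
            area_ok = abs(det['area'] - tr['area']) <= max_area_change * tr['area']
            if d < best_d and area_ok:
                best, best_d = det, d
        if best is not None:
            unmatched.remove(best)
            tr['stationary'] += 1          # matched nearby: one more stationary frame
            tr['area'], tr['centroid'] = best['area'], best['centroid']
        else:
            tr['stationary'] = 0           # simplification: reset on a miss
    for det in unmatched:                  # start a new track per unmatched detection
        tracks.append({'area': det['area'],
                       'centroid': det['centroid'],
                       'stationary': 0})
    return [tr for tr in tracks if tr['stationary'] >= alarm_count]

# A single unmoving detection observed for 46 frames trips a 45-frame alarm.
tracks = []
det = {'area': 48, 'centroid': (23.5, 12.5)}
for _ in range(46):
    alarms = update_tracks(tracks, [det], alarm_count=45)
```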
VISUALIZE THE RESULTS
CODES

roi = [100 80 360 240]; % region of interest, [x y width height]; (x, y) is the top-left corner of the portion of the image to be processed
% Maximum number of objects to track
maxNumObj = 200;
% Number of frames that an object must remain stationary before an alarm is raised
alarmCount = 45;
% Maximum number of frames that an abandoned object can be hidden before it is no longer tracked
maxConsecutiveMiss = 4;
areaChangeFraction = 20;     % Maximum allowable change in object area, in percent
centroidChangeFraction = 30; % Maximum allowable change in object centroid, in percent
% Minimum ratio between the number of frames in which an object is detected
% and the total number of frames, for that object to be tracked
minPersistenceRatio = 0.3;
% Offsets for drawing bounding boxes in the original input video
PtsOffset = int32(repmat([roi(1), roi(2), 0, 0], [maxNumObj 1])); % int32: convert to 32-bit signed integer; repmat: repeat copies of a matrix

%% Create a VideoFileReader System object to read video from a file.
hVideoSrc = vision.VideoFileReader;
hVideoSrc.Filename = 'Abandoned_Bag1.mp4';
hVideoSrc.VideoOutputDataType = 'single';

%% Create a ColorSpaceConverter System object to convert the RGB image to Y'CbCr format.
% Y'CbCr is a mathematical coordinate transformation from an associated RGB color space.
hColorConv = vision.ColorSpaceConverter('Conversion', 'RGB to YCbCr');

%% Create an Autothresholder System object with a threshold scale factor.
hAutothreshold = vision.Autothresholder('ThresholdScaleFactor', 1.3);

%% Create a MorphologicalClose System object to fill in small gaps in the detected objects.
hClosing = vision.MorphologicalClose('Neighborhood', strel('square', 5));

%% Create a BlobAnalysis System object to find the area, centroid, and bounding box of the objects in the video.
hBlob = vision.BlobAnalysis('MaximumCount', maxNumObj, 'ExcludeBorderBlobs', true);
hBlob.MinimumBlobArea = 100;
hBlob.MaximumBlobArea = 2500;

%% Create System objects to display results.
pos = [10 300 roi(3)+25 roi(4)+25];
hAbandonedObjects = vision.VideoPlayer('Name', 'Abandoned Objects', 'Position', pos);
pos(1) = 46 + roi(3); % move the next viewer to the right
hAllObjects = vision.VideoPlayer('Name', 'All Objects', 'Position', pos);
pos = [80+2*roi(3) 300 roi(3)-roi(1)+25 roi(4)-roi(2)+25];
hThresholdDisplay = vision.VideoPlayer('Name', 'Threshold', 'Position', pos);

%% Video Processing Loop
% Perform abandoned object detection on the input video using the System
% objects instantiated above.
firsttime = true;
while ~isDone(hVideoSrc)
    Im = step(hVideoSrc);

    % Select the region of interest from the original video
    OutIm = Im(roi(2):end, roi(1):end, :);
    YCbCr = step(hColorConv, OutIm);
    CbCr = complex(YCbCr(:,:,2), YCbCr(:,:,3));

    % Store the first video frame as the background
    if firsttime
        firsttime = false;
        BkgY = YCbCr(:,:,1);
        BkgCbCr = CbCr;
    end
    SegY = step(hAutothreshold, abs(YCbCr(:,:,1) - BkgY));
    SegCbCr = abs(CbCr - BkgCbCr) > 0.05;

    % Fill in small gaps in the detected objects
    Segmented = step(hClosing, SegY | SegCbCr);

    % Perform blob analysis
    [Area, Centroid, BBox] = step(hBlob, Segmented);

    % Call the helper function that tracks the identified objects and
    % returns the bounding boxes and the number of the abandoned objects.
    [OutCount, OutBBox] = videoobjtracker(Area, Centroid, BBox, maxNumObj, ...
        areaChangeFraction, centroidChangeFraction, maxConsecutiveMiss, ...
        minPersistenceRatio, alarmCount);

    % Display the abandoned object detection results
    Imr = insertShape(Im, 'FilledRectangle', OutBBox + PtsOffset, ...
        'Color', 'red', 'Opacity', 0.5);
    % Insert the number of abandoned objects in the frame
    Imr = insertText(Imr, [1 1], OutCount);
    step(hAbandonedObjects, Imr);

    % Display all the detected objects
    BlobCount = size(BBox, 1);
    BBoxOffset = BBox + int32(repmat([roi(1) roi(2) 0 0], [BlobCount 1]));
    Imr = insertShape(Im, 'Rectangle', BBoxOffset, 'Color', 'green');
    % Insert the number of all objects in the frame
    Imr = insertText(Imr, [1 1], OutCount);
    Imr = insertShape(Imr, 'Rectangle', roi);
    step(hAllObjects, Imr);

    % Display the segmented video
    SegBBox = PtsOffset;
    SegBBox(1:BlobCount, :) = BBox;
    SegIm = insertShape(double(repmat(Segmented, [1 1 3])), 'Rectangle', SegBBox, 'Color', 'green');
    step(hThresholdDisplay, SegIm);
end
release(hVideoSrc);
h = msgbox('The object has been detected!');
OUTPUT
REFERENCES
• Research paper by Medha Bhargava, Chia-Chih Chen, M. S. Ryoo, and J. K. Aggarwal, The University of Texas at Austin.
• C. Sears and Z. Pylyshyn, "Multiple Object Tracking."
• J. Martínez-del-Rincón, J. Elías Herrero, Jorge Gómez, and Carlos Orrite-Uruñuela, "Automatic Left Luggage Detection and Tracking using Multi-Camera."
• Anoop Mathew.
THANK YOU
MADE BY: ARUSHI CHAUDHRY AND SAUMYA TIWARI
