Fisheye/Omnidirectional View in
Autonomous Driving IV
Yu Huang
Outline
• FisheyeMultiNet: Real-time Multi-task Learning Architecture for
Surround-view Automated Parking System
• Generalized Object Detection on Fisheye Cameras for Autonomous
Driving: Dataset, Representations and Baseline
• SynWoodScape: Synthetic Surround-view Fisheye Camera Dataset for
Autonomous Driving
• Feasible Self-Calibration of Larger Field-of-View (FOV) Camera Sensors
for ADAS
FisheyeMultiNet: Real-time Multi-task Learning Architecture
for Surround-view Automated Parking System
• Automated parking is a low-speed maneuvering scenario that is quite
unstructured and complex, requiring full 360° near-field sensing around the
vehicle.
• In this paper, discuss the design and implementation of an automated parking
system from the perspective of camera-based deep learning algorithms.
• provide a holistic overview of an industrial system covering the embedded
system, use cases and the deep learning architecture.
• demonstrate a real-time multi-task deep learning network called
FisheyeMultiNet, which detects all the necessary objects for parking on a low-
power embedded system.
• FisheyeMultiNet runs at 15 fps for 4 cameras, and it has three tasks, namely
object detection, semantic segmentation and soiling detection (a minimal
sketch of this shared-encoder pattern follows this list).
• release a partial dataset of 5,000 images containing semantic segmentation and
bounding box detection ground truth via the WoodScape project.
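The slides do not give FisheyeMultiNet's exact layers; below is a minimal PyTorch sketch of the shared-encoder, three-decoder pattern they describe, with all layer sizes and channel counts as illustrative assumptions.

```python
# Minimal sketch of the shared-encoder / three-decoder pattern (not the
# published FisheyeMultiNet layers; all sizes are illustrative).
import torch
import torch.nn as nn

class MultiTaskSketch(nn.Module):
    def __init__(self, seg_classes=10, det_channels=6, soil_classes=4):
        super().__init__()
        # Shared encoder, run once per fisheye camera frame.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        # Three lightweight task heads reuse the same features.
        self.seg_head = nn.Conv2d(64, seg_classes, 1)      # per-pixel labels
        self.det_head = nn.Conv2d(64, det_channels, 1)     # box params per cell
        self.soil_head = nn.Sequential(                    # image-level soiling
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, soil_classes))

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_head(feats), self.det_head(feats), self.soil_head(feats)

seg, det, soil = MultiTaskSketch()(torch.randn(1, 3, 288, 544))  # one frame
```

Running the encoder once and branching into cheap heads is what makes the 15 fps, four-camera budget feasible on a low-power embedded system.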
FisheyeMultiNet: Real-time Multi-task Learning Architecture
for Surround-view Automated Parking System
FisheyeMultiNet: Real-time Multi-task Learning Architecture
for Surround-view Automated Parking System
Classification of parking scenarios: (a) Parallel Backward Parking, (b) Perpendicular Backward Parking, (c)
Perpendicular Forward Parking, (d) Ambiguous Parking, and (e) Fishbone Parking with road markings.
FisheyeMultiNet: Real-time Multi-task Learning Architecture
for Surround-view Automated Parking System
FisheyeMultiNet: Real-time Multi-task Learning Architecture
for Surround-view Automated Parking System
Illustration of the FisheyeMultiNet architecture, comprising object detection, semantic segmentation and soiling detection tasks.
FisheyeMultiNet: Real-time Multi-task Learning Architecture
for Surround-view Automated Parking System
Generalized Object Detection on Fisheye Cameras for
Autonomous Driving: Dataset, Representations and Baseline
• Object detection is a comprehensively studied problem in autonomous driving.
• However, it has been relatively less explored in the case of fisheye cameras.
• The standard bounding box fails in fisheye cameras due to the strong radial distortion,
particularly in the image’s periphery.
• explore better representations like oriented bounding box, ellipse, and generic polygon
for object detection in fisheye images in this work.
• use the IoU metric to compare these representations against accurate instance
segmentation ground truth (see the sketch after this list).
• design a novel curved bounding box model that has optimal properties for fisheye
distortion models.
• also design a curvature-adaptive perimeter sampling method for obtaining polygon
vertices, improving the relative mAP score by 4.9% compared to uniform sampling.
• Overall, the proposed polygon model improves mIoU relative accuracy by 40.3%.
• The dataset, comprising 10,000 images along with ground truth, will be made public.
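A minimal sketch of the IoU comparison described above: each representation is reduced to a vertex list, rasterized, and scored against the instance mask. The helper name and the OpenCV rasterization are our choices, not the paper's code.

```python
# Sketch of scoring a representation against instance-segmentation ground
# truth with IoU (helper names and rasterization via OpenCV are our choices).
import cv2
import numpy as np

def representation_iou(vertices, gt_mask):
    """vertices: (N, 2) vertex list of the representation (box, ellipse or
    polygon, all reduced to points); gt_mask: HxW binary instance mask."""
    pred = np.zeros_like(gt_mask, dtype=np.uint8)
    cv2.fillPoly(pred, [vertices.astype(np.int32)], 1)
    inter = np.logical_and(pred, gt_mask).sum()
    union = np.logical_or(pred, gt_mask).sum()
    return inter / union if union else 0.0

gt = np.zeros((100, 100), np.uint8)
cv2.circle(gt, (50, 50), 30, 1, -1)                       # toy instance mask
box = np.array([[20, 20], [80, 20], [80, 80], [20, 80]])  # axis-aligned box
print(representation_iou(box, gt))  # richer representations score higher
```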
Generalized Object Detection on Fisheye Cameras for
Autonomous Driving: Dataset, Representations and Baseline
Left: Illustration of fisheye distortion in the projection of an open cube. A 4th-degree polynomial models the
radial distortion; one can visually notice that the box deforms into a curved box. Right: The proposed Curved
Bounding Box uses a circle with an arbitrary center and radius, as illustrated. It captures the radial distortion
and obtains a better footpoint. The center of the circle can be equivalently reparameterized using the object center (x̂, ŷ).
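The exact parameterization is in the paper; the sketch below only illustrates the general idea, under the assumption that the curved box is the region between two concentric arcs of the fitted circle, swept over an angular range. All values are made up.

```python
# Illustrative outline of a curved bounding box, assuming it is the region
# between two concentric arcs of the fitted circle (the paper defines the
# exact parameterization; all values here are made up).
import numpy as np

def curved_box_outline(cx, cy, r_in, r_out, theta1, theta2, n=20):
    t = np.linspace(theta1, theta2, n)
    outer = np.stack([cx + r_out * np.cos(t), cy + r_out * np.sin(t)], axis=1)
    inner = np.stack([cx + r_in * np.cos(t), cy + r_in * np.sin(t)], axis=1)
    # Closed polygon: outer arc forward, inner arc backward.
    return np.concatenate([outer, inner[::-1]], axis=0)

poly = curved_box_outline(cx=640, cy=-200, r_in=500, r_out=620,
                          theta1=np.deg2rad(60), theta2=np.deg2rad(100))
```

The sides of such a box follow the image's radial distortion instead of cutting across it, which is what yields the better footpoint mentioned in the caption.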
Generalized Object Detection on Fisheye Cameras for
Autonomous Driving: Dataset, Representations and Baseline
Generalized Object Detection on Fisheye Cameras for
Autonomous Driving: Dataset, Representations and Baseline
Generic Polygon Representations. Left: Uniform angular sampling, where the intersection of the polygon with each
radial line is represented by one parameter per point (r). Middle: Uniform contour sampling using L2 distance. It can
be parameterized in polar coordinates using 3 parameters (r, θ, α), where α denotes the number of polygon vertices
within the sector and may be used to simplify the training. Alternatively, 2 parameters (x, y) can be used, as shown in
the figure on the right. Right: Variable-step contour sampling. The straight line at the bottom has fewer points than
curved parts such as the wheel. This representation maximizes the utilization of vertices according to local curvature.
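A sketch of curvature-adaptive perimeter sampling in the spirit of the right-hand figure: vertices are placed at equal steps of accumulated turning angle, so straight runs receive few points and curved parts like wheels receive many. The curvature proxy and the vertex budget are our assumptions, not the paper's exact rule.

```python
# Sketch of curvature-adaptive perimeter sampling: vertices are placed at
# equal steps of accumulated turning angle (our heuristic curvature proxy).
import numpy as np

def adaptive_sample(contour, n_vertices=24):
    """contour: (M, 2) densely sampled closed contour, M >> n_vertices."""
    v1 = contour - np.roll(contour, 1, axis=0)    # incoming segment
    v2 = np.roll(contour, -1, axis=0) - contour   # outgoing segment
    cross = v1[:, 0] * v2[:, 1] - v1[:, 1] * v2[:, 0]
    dot = (v1 * v2).sum(axis=1)
    turn = np.abs(np.arctan2(cross, dot))         # turning angle per point
    weight = turn + 1e-3                          # keep mass on straight runs
    cdf = np.cumsum(weight) / weight.sum()
    idx = np.searchsorted(cdf, np.linspace(0, 1, n_vertices, endpoint=False))
    return contour[np.clip(idx, 0, len(contour) - 1)]
```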
Generalized Object Detection on Fisheye Cameras for
Autonomous Driving: Dataset, Representations and Baseline
FisheyeYOLO is an extension of YOLOv3 that
can output different output representations
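The slides do not detail the FisheyeYOLO head; the sketch below shows one plausible way a YOLOv3-style 1x1 output convolution can be switched between representations by changing only the per-anchor regression channel count. All channel counts are assumptions, not the published head.

```python
# One plausible way a YOLOv3-style head switches representations: only the
# per-anchor regression channel count of the final 1x1 conv changes (these
# counts are assumptions, not FisheyeYOLO's published head).
import torch.nn as nn

def make_head(in_ch, num_anchors, num_classes, representation):
    per_anchor = {
        "box": 4,            # x, y, w, h
        "oriented_box": 5,   # x, y, w, h, angle
        "ellipse": 5,        # x, y, a, b, angle
        "polygon24": 48,     # 24 vertices, 2 parameters each
    }[representation] + 1 + num_classes   # + objectness + class scores
    return nn.Conv2d(in_ch, num_anchors * per_anchor, kernel_size=1)

head = make_head(in_ch=256, num_anchors=3, num_classes=5,
                 representation="polygon24")
```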
Generalized Object Detection on Fisheye Cameras for
Autonomous Driving: Dataset, Representations and Baseline
SynWoodScape: Synthetic Surround-view Fisheye
Camera Dataset for Autonomous Driving
• Four fisheye cameras with a 190° field of view cover the 360° around the vehicle.
• Due to their high radial distortion, standard algorithms do not extend easily.
• In this work, release a synthetic version of the surround-view dataset, covering many of its
weaknesses and extending it.
• Firstly, in the real-world dataset it is not possible to obtain ground truth for pixel-wise
optical flow and depth.
• Secondly, WoodScape did not provide all four cameras simultaneously, in order to sample
diverse frames; this means that multi-camera algorithms could not be designed, which is
now enabled in the new dataset.
• implemented surround-view fisheye geometric projections in the CARLA Simulator matching
WoodScape’s configuration and created SynWoodScape (a projection sketch follows this list).
• release 80k images with annotations for 10+ tasks.
• also release the baseline code and supporting scripts.
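A sketch of a 4th-order polynomial fisheye projection of the WoodScape kind, which a CARLA renderer must reproduce to match the real cameras. The coefficients and principal point here are illustrative, not the released calibration.

```python
# Sketch of a 4th-order polynomial fisheye projection (WoodScape-style);
# coefficients and principal point are illustrative, not the calibration.
import numpy as np

def project_fisheye(X, k, cx, cy):
    """X: (N, 3) points in the camera frame, z forward; k: (a1, a2, a3, a4)."""
    theta = np.arctan2(np.linalg.norm(X[:, :2], axis=1), X[:, 2])   # incidence
    r = k[0]*theta + k[1]*theta**2 + k[2]*theta**3 + k[3]*theta**4  # pixels
    phi = np.arctan2(X[:, 1], X[:, 0])                              # azimuth
    return np.stack([cx + r*np.cos(phi), cy + r*np.sin(phi)], axis=1)

uv = project_fisheye(np.array([[1.0, 0.5, 2.0]]),
                     k=(330.0, -1.0, 2.0, -0.5), cx=640.0, cy=480.0)
```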
SynWoodScape: Synthetic Surround-view Fisheye
Camera Dataset for Autonomous Driving
SynWoodScape: Synthetic Surround-view Fisheye
Camera Dataset for Autonomous Driving
SynWoodScape: Synthetic Surround-view Fisheye
Camera Dataset for Autonomous Driving
SynWoodScape: Synthetic Surround-view Fisheye
Camera Dataset for Autonomous Driving
SynWoodScape: Synthetic Surround-view Fisheye
Camera Dataset for Autonomous Driving
SynWoodScape: Synthetic Surround-view Fisheye
Camera Dataset for Autonomous Driving
SynWoodScape: Synthetic Surround-view Fisheye
Camera Dataset for Autonomous Driving
Overview of the surround-view camera based multi-task
visual perception framework. The distance estimation task
(blue block) makes use of semantic guidance and dynamic
object masking from semantic/motion estimation (green
and blue haze blocks) and camera-geometry adaptive
convolutions (orange block). Additionally, the detection
decoder features (gray block) are guided by the semantic
features. The encoder block (shown in the same color) is
common to all the tasks. The framework consists of
processing blocks to train the self-supervised distance
estimation (blue blocks), semantic segmentation
(green blocks), motion segmentation (blue haze blocks),
and polygon-based fisheye object detection (gray blocks).
Surround-view geometric information is obtained by post-
processing the predicted distance maps in 3D space
(perano block). The camera tensor Ct (orange block) helps
OmniDet yield distance maps on multiple camera
viewpoints and makes the network camera-independent.
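The slides name the camera tensor Ct without giving its contents; the sketch below only illustrates the general idea, under the assumption that Ct carries per-pixel ray-geometry channels concatenated to the features so one network can serve several viewpoints. OmniDet derives the channels from the calibrated fisheye projection instead.

```python
# Sketch of a camera tensor Ct in the spirit of the orange block: per-pixel
# ray-geometry channels concatenated to the features (channel content is
# our assumption; OmniDet uses the calibrated fisheye projection).
import torch
import torch.nn.functional as F

def camera_tensor(h, w, cx, cy):
    ys, xs = torch.meshgrid(torch.arange(h, dtype=torch.float32),
                            torch.arange(w, dtype=torch.float32), indexing="ij")
    r = torch.sqrt((xs - cx) ** 2 + (ys - cy) ** 2)   # radial distance per pixel
    ct = torch.stack([(xs - cx) / w, (ys - cy) / h, r / r.max()], dim=0)
    return ct.unsqueeze(0)                            # (1, 3, H, W)

feats = torch.randn(1, 64, 24, 48)                    # decoder features
ct = F.interpolate(camera_tensor(96, 192, 96.0, 48.0), size=feats.shape[-2:])
fused = torch.cat([feats, ct], dim=1)                 # input to the next conv
```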
SynWoodScape: Synthetic Surround-view Fisheye
Camera Dataset for Autonomous Driving
SynWoodScape: Synthetic Surround-view Fisheye
Camera Dataset for Autonomous Driving
Feasible Self-Calibration of Larger Field-of-
View (FOV) Camera Sensors for ADAS
• This paper proposes a self-calibration method that can be applied to multiple larger
field-of-view (FOV) camera models on ADAS.
• Firstly, perform steps such as edge detection, length thresholding, and edge grouping for
the segregation of robust line candidates from the pool of initial distorted line segments.
• A straightness cost constraint with a cross-entropy loss was imposed on the selected line
candidates, thereby exploiting that loss to optimize the lens-distortion parameters using
the Levenberg–Marquardt (LM) optimization approach.
• The best-fit distortion parameters are used for the undistortion of an image frame,
thereby enabling various high-end vision-based tasks on the distortion-rectified frame
(a minimal undistortion sketch follows this list).
• investigate experimental approaches such as parameter sharing between multiple
camera systems and a model-specific empirical γ-residual rectification factor.
• Quantitative comparisons are made between the proposed method and the traditional
OpenCV method on the KITTI dataset with synthetically generated distortion ranges.
• A pragmatic qualitative analysis has been conducted by streamlining high-end
vision-based tasks such as object detection, localization and mapping, and
auto-parking on undistorted frames.
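The slides do not give the lens model; below is a minimal undistortion sketch assuming a one-parameter division model r_u = r_d / (1 + k1·r_d²), with a best-fit k1 plugged into a remap of the frame.

```python
# Minimal undistortion sketch, assuming a one-parameter division model
# r_u = r_d / (1 + k1 * r_d^2); the paper's actual lens model and best-fit
# parameters are not given on these slides (k1 must be nonzero here).
import cv2
import numpy as np

def undistort_division(img, k1, cx, cy):
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    # For each undistorted output pixel, find where to sample in the input.
    ru = np.sqrt((xs - cx)**2 + (ys - cy)**2) + 1e-9
    # Invert r_u = r_d / (1 + k1 r_d^2); take the root that tends to r_u.
    rd = (1 - np.sqrt(np.maximum(1 - 4*k1*ru**2, 0))) / (2*k1*ru)
    scale = rd / ru
    return cv2.remap(img, cx + (xs - cx)*scale, cy + (ys - cy)*scale,
                     cv2.INTER_LINEAR)
```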
Feasible Self-Calibration of Larger Field-of-
View (FOV) Camera Sensors for ADAS
Proposed Pipeline on ADAS workbench (a) ADAS Platform: Camera sensors setup and image acquisition
Feasible Self-Calibration of Larger Field-of-
View (FOV) Camera Sensors for ADAS
Proposed Pipeline on ADAS workbench (b) Proposed method with block schematics.
Feasible Self-Calibration of Larger Field-of-
View (FOV) Camera Sensors for ADAS
Structural anomaly induced into a
scene due to heavy lens distortion
caused by wide-angle cameras with
field-of-view 120° < FOV < 140°.
Feasible Self-Calibration of Larger Field-of-
View (FOV) Camera Sensors for ADAS
Lens Projection Models: (a) Standard Camera Pinhole Projection Model. (b) Larger FOV Lens Orthogonal Projection Model.
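A few lines contrasting the two models on this slide: the pinhole radius f·tan(θ) diverges toward 90°, while the orthogonal model's f·sin(θ) stays bounded by f, which is what makes larger FOVs representable. The focal length is illustrative.

```python
# Pinhole r = f*tan(theta) versus larger-FOV orthogonal r = f*sin(theta);
# f is an illustrative focal length in pixels.
import numpy as np

f = 400.0
theta = np.deg2rad(np.linspace(0, 70, 8))   # angle from the optical axis
r_pinhole = f * np.tan(theta)               # blows up as theta -> 90 deg
r_ortho = f * np.sin(theta)                 # stays bounded by f
for t, rp, ro in zip(np.rad2deg(theta), r_pinhole, r_ortho):
    print(f"theta={t:5.1f} deg  pinhole r={rp:8.1f}px  orthogonal r={ro:6.1f}px")
```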
Feasible Self-Calibration of Larger Field-of-
View (FOV) Camera Sensors for ADAS
Proposed Self-calibration design
Feasible Self-Calibration of Larger Field-of-
View (FOV) Camera Sensors for ADAS
Pre-processing of line candidates and
estimation of the straightness constraint (sketched below).
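A sketch of the pre-processing named on this slide, with illustrative thresholds: Canny edges, length thresholding, grouping of edge chains into line candidates, then a straightness cost taken as the total-least-squares line-fit residual (our concrete choice for the constraint; the file path is also illustrative).

```python
# Edge detection, length thresholding and grouping into line candidates,
# plus a straightness cost per candidate (thresholds are illustrative).
import cv2
import numpy as np

def line_candidates(gray, min_len=80):
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    # Length thresholding: keep only long edge chains as robust candidates.
    return [c.reshape(-1, 2).astype(np.float32)
            for c in contours if cv2.arcLength(c, False) >= min_len]

def straightness_cost(pts):
    # Smallest singular value of the centered points: zero iff collinear.
    s = np.linalg.svd(pts - pts.mean(axis=0))[1]
    return s[1] / len(pts)

gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # any distorted frame
costs = [straightness_cost(c) for c in line_candidates(gray)]
```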
Feasible Self-Calibration of Larger Field-of-
View (FOV) Camera Sensors for ADAS
Schematic of distortion parameter
estimation using LM optimization in normal
mode and parameter-sharing mode (a sketch on toy data follows).
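A sketch of the LM step on toy data: scipy's least_squares with method="lm" is a Levenberg–Marquardt solver. Passing one camera is the normal mode; stacking residuals from several cameras into one fit is the parameter-sharing mode. The division model and helpers repeat the earlier sketches and remain assumptions.

```python
# LM optimization of a shared distortion parameter on toy data (division
# model and straightness residual as in the earlier sketches).
import numpy as np
from scipy.optimize import least_squares

def undistort_points(pts, k1, cx, cy):
    d = pts - np.array([cx, cy])
    rd = np.linalg.norm(d, axis=1, keepdims=True) + 1e-9
    return np.array([cx, cy]) + d * (1 / (1 + k1 * rd**2))

def straightness(pts):
    s = np.linalg.svd(pts - pts.mean(axis=0))[1]
    return s[1] / len(pts)           # line-fit residual per candidate

def residuals(params, cameras):
    k1 = params[0]                   # parameter sharing: one k1 for all cameras
    return np.array([straightness(undistort_points(l, k1, c["cx"], c["cy"]))
                     for c in cameras for l in c["lines"]])

def distort(pts, k1, cx, cy):        # toy data: bend straight lines, known k1
    d = pts - np.array([cx, cy])
    ru = np.linalg.norm(d, axis=1, keepdims=True)
    rd = (1 - np.sqrt(1 - 4*k1*ru**2)) / (2*k1*ru)
    return np.array([cx, cy]) + d * (rd / ru)

x = np.linspace(100, 1100, 50)
cams = [{"cx": 640, "cy": 400,
         "lines": [distort(np.stack([x, np.full_like(x, y)], 1), -2e-7, 640, 400)
                   for y in (150, 650)]}]
fit = least_squares(residuals, x0=[-1e-8], args=(cams,), method="lm")
print("recovered k1:", fit.x[0])     # should approach -2e-7
```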
Feasible Self-Calibration of Larger Field-of-
View (FOV) Camera Sensors for ADAS
Feasible Self-Calibration of Larger Field-of-
View (FOV) Camera Sensors for ADAS
Feasible Self-Calibration of Larger Field-of-
View (FOV) Camera Sensors for ADAS
Feasible Self-Calibration of Larger Field-of-
View (FOV) Camera Sensors for ADAS
Feasible Self-Calibration of Larger Field-of-
View (FOV) Camera Sensors for ADAS
Severe distortion cases rectified
using several approaches
[28,29], and the proposed method
with and without the empirical
γ-hyperparameter.
Feasible Self-Calibration of Larger Field-of-
View (FOV) Camera Sensors for ADAS
Feasible Self-Calibration of Larger Field-of-
View (FOV) Camera Sensors for ADAS
Feasible Self-Calibration of Larger Field-of-
View (FOV) Camera Sensors for ADAS
Data acquisition scenarios using
various camera models.
Feasible Self-Calibration of Larger Field-of-
View (FOV) Camera Sensors for ADAS
Feasible Self-Calibration of Larger Field-of-
View (FOV) Camera Sensors for ADAS
Feasible Self-Calibration of Larger Field-of-
View (FOV) Camera Sensors for ADAS
Feasible Self-Calibration of Larger Field-of-
View (FOV) Camera Sensors for ADAS
Feasible Self-Calibration of Larger Field-of-
View (FOV) Camera Sensors for ADAS
Feasible Self-Calibration of Larger Field-of-
View (FOV) Camera Sensors for ADAS
Feasible Self-Calibration of Larger Field-of-
View (FOV) Camera Sensors for ADAS
Feasible Self-Calibration of Larger Field-of-
View (FOV) Camera Sensors for ADAS
Feasible Self-Calibration of Larger Field-of-
View (FOV) Camera Sensors for ADAS
Feasible Self-Calibration of Larger Field-of-
View (FOV) Camera Sensors for ADAS
Auto-parking scenario on the rear fisheye camera: real-time visual SLAM pipeline on lens-distortion-rectified sensor data.
Feasible Self-Calibration of Larger Field-of-
View (FOV) Camera Sensors for ADAS