The document discusses the design of a robotic system called SPARSH. It involves implementing techniques like machine vision, simultaneous localization and mapping (SLAM), and path detection algorithms using cameras to allow the robot to navigate autonomously and map its surroundings. The robot will be tested to perform surveillance and disaster management tasks. The project aims to develop an advanced, field-ready robotic system incorporating solar charging by 2014 through continued work by student teams.
2. India’s space missions, especially the recent “Chandrayaan” and the planned “Mangalyaan”.
Need for artificially intelligent surveillance techniques (defense, civil structures, calamity response).
Channeling our ambition of contributing to the country’s development in the right direction.
3. A pre-project on SPARSH, hence the name PURV SPARSH (“PURV” in Sanskrit means “before”).
State-of-the-art design for all-terrain movement.
All the image-processing automation is to be tested first on PURV, since its control parameters are simpler.
11. Color-based segmentation: segmenting a region of the image on the basis of color, keeping pixels at a desired threshold value or lying between minimum and maximum marks.
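A minimal sketch of this threshold idea, in pure Python with made-up pixel values (in practice OpenCV's inRange/threshold functions do this over real frames):

```python
# Color-based segmentation sketch: keep pixels whose value lies
# between minimum and maximum threshold "marks", zero the rest.
# The 3x3 "image" and the thresholds are illustrative assumptions.

def segment_by_threshold(image, lo, hi):
    """Return a binary mask: 1 where lo <= pixel <= hi, else 0."""
    return [[1 if lo <= px <= hi else 0 for px in row] for row in image]

# Tiny single-channel "image"
img = [[ 10, 120, 200],
       [130,  90, 125],
       [250, 110,  40]]

mask = segment_by_threshold(img, 100, 150)
```

The same per-pixel test extends to three channels by requiring every channel to fall inside its own band.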
Geometric segmentation: segmenting a region on the basis of the geometric appearance of the object in an image.
Line segment detection: detecting line segments in an image using the Hough transform.
Various shapes can be detected with the Hough transform, which is helpful in segmentation.
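A coarse sketch of Hough line detection in pure Python (1-degree θ steps, toy edge pixels; a real pipeline would use cv2.HoughLines on an edge map):

```python
import math

# Hough-transform sketch: every edge pixel votes for all (rho, theta)
# line parameterizations passing through it; the strongest bin is the
# detected line. The edge pixels below are an assumed example.
def hough_peak(points, max_rho):
    """Vote in (rho, theta) space; return (rho, theta_deg, votes) of the peak."""
    acc = {}
    for x, y in points:
        for theta_deg in range(0, 180):
            t = math.radians(theta_deg)
            rho = round(x * math.cos(t) + y * math.sin(t))
            if -max_rho <= rho <= max_rho:
                key = (rho, theta_deg)
                acc[key] = acc.get(key, 0) + 1
    best = max(acc, key=acc.get)
    return best[0], best[1], acc[best]

# Edge pixels lying on the horizontal line y = 2
pts = [(0, 2), (1, 2), (2, 2), (3, 2)]
rho, theta, votes = hough_peak(pts, 10)
```

All four collinear pixels vote into the same bin near (ρ = 2, θ = 90°), which is why the peak identifies the line.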
12. Histogram equalization or enhancement:
The histogram of an image is the plot of pixel values against their frequency over the whole image. Equalization redistributes the frequencies of all pixel values uniformly, which results in clearer visualization of separate objects.
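The standard CDF remapping behind equalization can be sketched in a few lines of pure Python (cv2.equalizeHist is the practical equivalent; the 2x2 low-contrast image is an assumed example):

```python
# Histogram-equalization sketch for an 8-bit grayscale image.
def equalize(image, levels=256):
    flat = [px for row in image for px in row]
    n = len(flat)
    # Histogram: frequency of each pixel value.
    hist = [0] * levels
    for px in flat:
        hist[px] += 1
    # Cumulative distribution function of the histogram.
    cdf = []
    total = 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = min(c for c in cdf if c > 0)
    # Remap each value so the CDF becomes (approximately) linear,
    # spreading the clustered values over the full 0..255 range.
    def remap(px):
        return round((cdf[px] - cdf_min) / (n - cdf_min) * (levels - 1))
    return [[remap(px) for px in row] for row in image]

# A low-contrast image clustered around mid-gray
img = [[100, 101], [102, 103]]
out = equalize(img)
```

The four near-identical gray levels are stretched to 0, 85, 170, 255, which is exactly the "clear visualization of separate objects" effect described above.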
13. For sensing the third dimension of the environment, various original ideas are to be tested in a properly simulated environment. The proposed experiments are:
Third-dimension sensing using a camera only, via the apparent area of an object and its rate of change while the bot approaches or moves away.
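The geometry behind this experiment: for a pinhole camera, apparent linear size scales as 1/distance, so apparent area scales as 1/distance². A sketch with assumed calibration numbers:

```python
import math

# Area-based range sketch: given one calibrated (distance, area) pair,
# the area ratio between frames yields the new distance.
# The reference distance and pixel areas are made-up example values.
def distance_from_area(d_ref, area_ref, area_now):
    """New distance, from area ~ 1/d^2 => d = d_ref * sqrt(area_ref/area_now)."""
    return d_ref * math.sqrt(area_ref / area_now)

# Calibrated: object covered 400 px^2 at 2.0 m.
# Now it covers 1600 px^2, i.e. 4x the area -> half the distance.
d = distance_from_area(2.0, 400, 1600)
```

Tracking the rate of change of this estimate over frames also gives the approach speed.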
14. Third-dimension sensing using the impact of light waves of different intensities on an object: observing the light intensities over the object and its background under natural light, then comparing that response with the response when extra light is projected onto the object, the intensity of the source being known.
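An idealized sketch of this experiment: the extra irradiance a known point source adds falls off as 1/d², so subtracting the ambient reading from the lit reading gives a range estimate. Real surface reflectance and sensor response make this much harder in practice; the numbers below are illustrative assumptions.

```python
import math

# Inverse-square range sketch for the known-source lighting experiment.
def range_from_intensity(source_power, ambient_reading, lit_reading):
    """Distance from the 1/d^2 falloff of the intensity our source adds."""
    extra = lit_reading - ambient_reading   # response due to our source only
    return math.sqrt(source_power / extra)

# Source "power" 100 units; ambient reading 5; lit reading 9.
# extra = 4  ->  d = sqrt(100 / 4) = 5 units
d = range_from_intensity(100.0, 5.0, 9.0)
```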
15. 3-D detection using multiple cameras: the position of objects in the image frames of all the cameras would be detected, and the frames would then be superimposed. The relative displacement of the same object across the frames would be compared, and the actual distance of the object from the camera lens found.
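For the two-camera case this is classical stereo triangulation: the shift (disparity) of the same object between the superimposed frames gives depth as Z = f·B/disparity. A sketch with assumed focal length, baseline, and pixel positions:

```python
# Stereo-depth sketch: two horizontally separated cameras, rectified
# views. Focal length (in pixels), baseline, and object positions
# below are illustrative assumptions.
def stereo_depth(focal_px, baseline_m, x_left, x_right):
    """Depth in metres from pixel disparity between the two views."""
    disparity = x_left - x_right   # pixels; larger shift = closer object
    return focal_px * baseline_m / disparity

# f = 700 px, cameras 0.1 m apart, object at x=350 (left) and x=300 (right)
z = stereo_depth(700.0, 0.1, 350, 300)   # disparity = 50 px
```

Nearby objects shift more between the frames than distant ones, which is why the relative displacement encodes distance.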
18. “Simultaneous localization and mapping (SLAM) is a technique used by robots and autonomous vehicles to build up a map within an unknown environment (without a priori knowledge), or to update a map within a known environment (with a priori knowledge from a given map), while at the same time keeping track of their current location.” (Wikipedia)
19. Implementing a novel vSLAM (visual SLAM), based on visual sensors.
Continuous acquisition and storage of images at various points during movement through unknown terrain.
A novel idea of implementing this using only cameras as the detector (previous designs used LIDAR, SONAR, or odometric sensors alongside the visual sensors).
Distances to be estimated using the previously explained methods.
20. Using a camera mounted on 2 servo motors, for rotation in the horizontal and vertical planes with angle-sensing feedback, to find the θ and φ coordinates of the robot in a spherical coordinate system.
Using the strength of the XBee wireless signal received from the robot to find the robot's radial coordinate.
The vSLAM system is to run at the control station.
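The localization above can be sketched as: the pan/tilt angles give θ and φ, an RSSI log-distance path-loss model gives the radial coordinate r, and the triple converts to Cartesian coordinates about the control station. The path-loss exponent and 1 m reference RSSI below are assumed calibration values, not measured XBee figures.

```python
import math

# Radial coordinate from signal strength: log-distance path-loss model.
def rssi_to_range(rssi_dbm, rssi_at_1m=-40.0, path_loss_n=2.0):
    """Range in metres from received signal strength (assumed calibration)."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_n))

# Spherical -> Cartesian, physics convention: theta measured from the
# +z axis, phi in the x-y plane.
def spherical_to_cartesian(r, theta_deg, phi_deg):
    t, p = math.radians(theta_deg), math.radians(phi_deg)
    return (r * math.sin(t) * math.cos(p),
            r * math.sin(t) * math.sin(p),
            r * math.cos(t))

r = rssi_to_range(-60.0)                 # 10 m under the assumed model
x, y, z = spherical_to_cartesian(r, 90.0, 0.0)
```

RSSI ranging is noisy in practice (multipath, antenna orientation), so the r estimate would normally be filtered over many readings.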
23. Activating responses on the basis of the image-processing results.
Interfacing the MCU with the motors on the module through proper circuit design.
Programming the MCU to control the motors on the basis of signals received from the control station.
Implementation of a PID-controller-based control system.
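A minimal discrete PID loop of the kind the MCU would run for motor speed control; the gains, setpoint, and toy first-order plant are illustrative, not tuned values for SPARSH's motors:

```python
# Discrete PID controller sketch: output = Kp*e + Ki*integral(e) + Kd*de/dt.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured, dt):
        """Return the control output for one time step."""
        error = setpoint - measured
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Drive a toy first-order plant toward a speed setpoint of 1.0
pid = PID(kp=2.0, ki=0.5, kd=0.1)
speed = 0.0
for _ in range(400):                     # 20 s of simulation at dt = 0.05 s
    u = pid.update(1.0, speed, dt=0.05)
    speed += (u - speed) * 0.05          # simple plant response
```

The integral term removes the steady-state error that a proportional-only controller would leave against motor load.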
26. Continuous transmission of pictures taken by the camera to the control station.
Interfacing the receiver with the PC at the control station.
Transmission of the signals obtained after processing on the PC via another transmitter.
Reception of the decision signals by the receiver present on SPARSH.
Interfacing that receiver with the onboard MCU.
28. Analyzing the pros and cons of having the charging circuitry on board versus at a charging station.
Choosing the panel with the weight, flexibility, and efficiency parameters in mind (a 7.2 V, 2000 mAh battery is to be charged).
Incorporating the solar power devices, viz. the panels and circuitry, on the basis of the energy demands of the module using the LiPo batteries.
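A back-of-envelope check for the panel sizing: the 7.2 V, 2000 mAh pack holds about 14.4 Wh, so panel wattage fixes a lower bound on charge time. The 10 W panel and 70% charge efficiency below are assumed figures for illustration, not a chosen design point.

```python
# Charge-time estimate for the LiPo pack from a solar panel.
def charge_time_hours(capacity_mah, pack_v, panel_w, efficiency=0.7):
    """Hours to fill the pack, given panel power and charge efficiency."""
    energy_wh = capacity_mah / 1000.0 * pack_v   # 2000 mAh * 7.2 V = 14.4 Wh
    return energy_wh / (panel_w * efficiency)

t = charge_time_hours(2000, 7.2, 10.0)   # hypothetical 10 W panel
```

About two hours of full sun under these assumptions, which is the kind of number that decides between onboard panels and a charging station.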
29. Since advanced versions of the project are expected, meticulous documentation will be maintained for proper knowledge transfer.
Students from each following batch will be recruited by the leading members of the present batch.
Attainment of a field-ready SPARSH by the end of 2014.
30. A product with immense applications in space research, defense, and civil-structure surveillance, as well as in disaster management.
Multiple patents and publications in the areas pertaining to the project.
A platform for students from every forthcoming batch to hone their skills and, through the development of the advanced versions, contribute to the wide range of cutting-edge research areas this project covers.
A contribution of the NIT-S fraternity to the technological advancement of the country, which expects itself to be a superpower by 2020.
31. Winners of 2 robotics events at “Techniche-2012”, IIT Guwahati, one of them an autonomous-robotics event.
Winners of all the robotics events we participated in at NIT-S (5 in total).
Organizers of the first wireless-robotics event in the history of NIT-S.
Our team member Shivam Chaubey won the “Best Innovator” award, given by the Department of Metallurgy at IIT-Varanasi, during his school days (Class 12). Moreover, in Class 11 he independently won 1st prize in a robotics event, competing with teams of engineering students from the best T-schools of the country (IIT-Kanpur, IIT-Varanasi, etc.).
33. Procurement
• Mechanical
• Electronics and power
Mechanical design & implementation
• Realizing the CAD model
Electronic and wireless control
• Controlling the movements of the robot wirelessly, under manual command
• Interfacing the hardware with the PC and implementing wireless controls
• GPS-based control
Machine vision
• Colored-object tracking
• Development of a path-detection algorithm
• Applying the path-detection algorithm
• Interfacing the machine vision with GPS
Advanced control
• Designing an advanced control system for fast response
34. • Procurement: 1 month
• Mechanical design and implementation: 1 month
• Electronics and wireless control: 1 month
• Machine vision: 2 months
• Advanced control and machine learning: 2 months
35. S. Tsugawa, “Vision-Based Vehicles in Japan: Machine Vision Systems and Driving Control Systems,” IEEE Transactions on Industrial Electronics, vol. 41, no. 4, August 1994.
H. Moravec, “Obstacle Avoidance and Navigation in the Real World by a Seeing Robot Rover,” Robotics Institute, Carnegie Mellon University; doctoral dissertation, Stanford University.
N. Karlsson, E. Di Bernardo, J. Ostrowski, L. Goncalves, P. Pirjanian, and M. E. Munich, “The vSLAM Algorithm for Robust Localization and Mapping,” Proc. of the Int. Conf. on Robotics and Automation (ICRA), 2005.
S. Jahanbin, R. Jahanbin, and A. C. Bovik, “Passive Three-Dimensional Face Recognition Using Iso-Geodesic Contours and Procrustes Analysis,” Springer Science+Business Media New York, 2011.
A. Baksheev, Itseez Inc., “Using NVIDIA GPUs with OpenCV for Accelerated High-Performance Computer Vision.”
E. D. Dickmanns, Inst. für Systemdynamik und Flugmechanik, Univ. der Bundeswehr München, Neubiberg, Germany, IEEE Intelligent Vehicle Symposium, 2002 (Volume 1).
www.wikipedia.com
Video lectures, “Image Processing: From Mars to Hollywood,” Duke University.
Video lectures on machine learning, Stanford University.