Doron REU Final Paper
Smart City Surveillance Running on Vehicles
Ma’ayan Doron, Author
Department of Computer Science
University of Virginia
Charlottesville, VA 22904
Alex and Sam Canner, Research Partners
Purdue University School of Engineering
Indiana University-Purdue University Indianapolis
Indianapolis, IN 46202
Hongbo Liu, Mentor
Purdue University School of Engineering
Indiana University - Purdue University Indianapolis
Indianapolis, IN 46202
Abstract—The urban population of the United States has been
increasing over the past several years. To manage the needs
of expanding cities and their citizens, this paper proposes the
development of a model for smart city surveillance that runs
on vehicles by utilizing a variety of vehicle-mounted sensing
capabilities. The model aims to crowdsource real-time urban
events. Vehicles are the logical choice for this endeavor, as the
technology present in current models continues to expand and
incorporate many features such as GPS services and mobile
phone interactions. Additionally, the development of vehicle-mounted
cameras has made it easier to collect vast amounts of data,
covering not only images but also position and acceleration.
Thus, vehicles with mounted cameras will leverage existing
communication and sensing infrastructure to collect data about
events in their environment. Once collected, the data contributed
from multiple participants can then be uploaded to a cloud server
to develop a detailed view of an event as well as provide significant
statistics about urban communities. The cloud server will contain
a database filled with requests, conditions, and vehicle target
locations. This information will then be made accessible to the
public via connection to this server. Testing of this framework
has shown that the system performs efficiently, keeping client
wait times low even as workload increases.
I. INTRODUCTION
Urban communities are hubs for innovation and employ-
ment, and therefore naturally draw in large, ever-growing pop-
ulations. This growth, however, results in an increasing need
to more effectively manage traffic congestion and abnormal
road conditions due to frequent and heavy daily use. To ease
transportation for pedestrians, cyclists, and drivers, participa-
tory sensing networks have become a focus of many research
initiatives. These networks allow both city government profes-
sionals and residents to more easily gather, analyze, and share
data about local phenomena to better inform policy-making as
well as the daily lives and travel routes of citizens in cities
that are constantly moving and changing.
Mobile devices, especially smartphones, are the key to
enabling participatory sensing networks. Over two billion
people carry smartphones. These devices contain a multitude
of sensors, such as cameras, microphones, and GPS, that
can collect information about users' local environments. By
leveraging these sensors, a participatory sensing network can
collect information about sidewalk and roadway conditions so
that they may be accessed and utilized in the daily lives of
residents.
The majority of current smart city projects rely on fixed
infrastructure to collect surveillance data in urban areas instead
of mobile devices. Such an approach, however, is insufficient.
Fixed infrastructure lacks mobility, meaning that coverage in a
requested area may be insufficient. Additionally, fixed infras-
tructure can be expensive to maintain. Toward this end, our
research team proposes the development of a novel paradigm
for a smart city surveillance server running on vehicles that
leverages the sensing capabilities of vehicle-mounted devices.
To circumvent the above-described issues of mobility and cost,
our team leverages the existing sensing capabilities carried by
vehicles and mobile devices.
The main idea for our approach is to empower each vehicle
to sense and share data about events occurring in their vicinity
via Android devices mounted on vehicle dashboards. Vehicles
are highly mobile, solving the issue of insufficient coverage,
and by leveraging existing mobile device sensors, we avoid the
maintenance expenses bound to fixed infrastructure. Through
this paradigm, we provide urban communities with real-time,
fine-grained street surveillance through crowdsensing image
information. At this point in time, we have chosen to focus
strictly on street-event surveillance, specifically the abnormal
conditions of sidewalks and road facilities. Road facility
concerns include vehicle traffic and road surface quality (i.e.
are there potholes?). Sidewalk conditions include factors like
accumulated snow, pedestrian traffic flow, and sidewalk clos-
ings due to construction. By providing such information to
large urban communities, our smart city surveillance project
aims to build up the developing smart city with enhanced
safety and efficiency for all of its inhabitants.
II. RELATED WORK
Although the participatory sensing network is a newer
concept that requires further research and development, many
existing papers discuss successful frameworks that are under
continuous development, as well as how to incentivize citizens
to participate in these networks. Our team carefully reviewed
these papers while building our own project, and incorporated
the authors' ideas into our own work.
Burke et al. [2] discuss the concept of participatory sensing,
which involves leveraging the sensors embedded in smart-
phones as location-aware data collection instruments. Smart
city surveillance systems can task mobile devices to form in-
teractive, participatory sensor networks through which profes-
sionals and the public can gather, analyze, and share data about
their urban environments. The authors propose utilizing the
microphone, image, and GPS sensors within mobile devices
to collect data. Our team adopted the idea of utilizing image
sensors, specifically the cameras of Android mobile devices
that are mounted on vehicle dashboards, to collect said data.
We also adopted the idea of using the GPS sensors in these
mobile devices to record the location from which a client is
uploading data to the cloud server. The authors also suggested
that the coverage of under-reported areas could be improved by
prompting user entry based on their location. Our team decided
that to improve coverage through our surveillance system, we
would provide administrators special privileges to access the
server and dictate destinations for vehicles to visit in order to
collect data.
Kanhere [3] introduces the concept of environmental-centric
sensing applications, which focus on collecting data about the
surroundings of users and presenting the captured data to the
larger community for their use. As our smart city surveillance
system is meant to collect information about road and sidewalk
conditions, we utilized the discussed example applications
as models for how to collect environmental data and the
data variables that would need to be collected to adequately
describe events. For example, the author introduces a road and
traffic condition-sensing app named “Nericell,” which employs
microphone, camera, and GPS sensors to record data. Using
this as a model, our team decided to incorporate the camera
and GPS sensors to collect image data about the environments
of users by mounting mobile devices on vehicle dashboards.
Perera et al. [4] investigate the concept of sensing as a
service model. Our team used the outlined sensing as a service
model to design our own framework. This involves a cloud
server that handles various requests, which in our case are
the queries sent by clients and reports uploaded by vehicles.
The sensing services provided by mobile phones are used to
collect the data that is uploaded to the cloud server and stored
in a database and/or returned to the requester. The server also
has the ability to push requests for data to vehicles in areas
of interest via smartphone. This paper also touched on the
advantages of the sensing as a service model, such as real-time
data collection and widespread coverage due to vehicle and
smartphone mobility, which our team incorporated into our
motivations for developing our smart city surveillance system.
Although our surveillance system is still very much under
development, our team has discussed how to motivate individ-
uals to contribute data to our server. If there is little data about
a point of interest, then data collected about that phenomenon
will not be reliable. Thus, it is important for individuals to feel
motivated to participate in our surveillance system. Jaimes et
al. [5] discuss the different incentive strategies to regularly
Fig. 1. Twelve-byte query format
and reliably incorporate individuals into crowdsensing tasks.
Although this paper proposed both monetary and nonmonetary
incentives, our team does not currently have the funding to
provide monetary compensation for users, and so we must
currently rely on what the authors of this paper call “col-
lective incentives.” In this approach, users are motivated to
work together for a common good, or a better community.
Unfortunately, as the authors pointed out, this approach risks
a situation in which many users are free riders, waiting for
others to upload information without contributing any data
reports themselves. Although the collective incentive approach
is flawed, we are currently relying on this strategy until we
have fully developed our server and can focus on incorporating
other incentives.
III. OVERVIEW
The main components of our vehicle-assisted crowdsensing
framework include a user end, central server, wireless network
infrastructure, and recruited vehicles. At a high-level view,
a query from a user arrives at the central server, which is
hosted on a cloud platform. Corresponding crowdsensing tasks
are created and scheduled by parsing and interpreting query
details. Vehicles are then recruited by the central server to
carry out these crowdsensing tasks. The server can then collect
the sensed data from recruited vehicles and store it in the
database for later retrieval. The resulting sensed data is then
returned to users in order to fulfill the sent queries.
IV. METHOD
In this section, I will outline the six critical algorithms
and components of this project, including the customized user
query, the asynchronous nature of Node.js, MySQL database
interaction, the timestamp algorithm, concurrent query han-
dling, and the trajectory location system.
A. Customized User Query
Clients of this smart city surveillance system connect with
the server by sending it queries. These queries consist of
twelve bytes and contain the details of a client's request so that
the server can retrieve the requested information and return it
Fig. 2. The file retrieveData.js that forgoes the asynchronous behavior of Node.js
to the client. As seen in Figure 1, the first six bytes of the query
identify the location of the client. This includes the geographic
information describing the road or location about which the
user is requesting data. The seventh and eighth bytes describe
the unique identification of the client that sent the query to
the server. The ninth byte identifies the classification of the
requested data (i.e. traffic, physical road conditions, etc.). The
final three bytes are reserved for any additional functions that
we wish to include later on as we continue to develop this
server.
Once the server receives a query, it feeds the string of
unsigned integers into a buffer. The buffer is then parsed
into separate integers and assigned to their appropriate, unique
variables (i.e. location, ID, request type, miscellaneous). This
way, the data can be more easily interpreted by the server and
stored in the database.
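The parsing step above can be sketched in Node.js as follows. This is a minimal sketch, not the actual server code: the field names and the big-endian encoding of the client ID are our assumptions.

```javascript
// Hypothetical sketch of parsing the twelve-byte query with a Node.js Buffer.
// Field names and the big-endian client-ID byte order are assumptions.
function parseQuery(buf) {
  if (buf.length !== 12) throw new Error('query must be exactly 12 bytes');
  return {
    location: buf.subarray(0, 6),   // bytes 1-6: geographic information
    clientId: buf.readUInt16BE(6),  // bytes 7-8: unique client identification
    requestType: buf.readUInt8(8),  // byte 9: classification of the request
    misc: buf.subarray(9, 12)       // bytes 10-12: reserved for future use
  };
}

// Example: client 258 (0x0102) requesting data of type 3
const parsed = parseQuery(Buffer.from([10, 0, 20, 0, 0, 0, 1, 2, 3, 0, 0, 0]));
console.log(parsed.clientId, parsed.requestType); // 258 3
```

Once split this way, each field can be bound to its own variable and forwarded to the database layer.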
B. The Asynchronous Nature of Node.js
Node.js is an asynchronous JavaScript runtime, meaning that our
server has trouble hosting multiple simultaneous client
connections to the MySQL (My Structured Query Language)
database. To combat this issue, we implemented code to have
the server connect directly to the database and retrieve data
instead (Figure 2). The code allows the server to connect to the
database to retrieve, in the case of Figure 2, an identification
from the “destinations” table of the created database “liu”,
which is represented by the variable “id.”
As can be seen in Figure 2, a test sequence representing
a time is used to retrieve an ID from the MySQL database.
The “destinations” table of the database is searched to find
an ID with a time of entry that matches the inputted time of
the test sequence. This ID is then extracted from the database
and printed to the console of the user. Using this strategy, our
research team is able to forgo the restrictive, asynchronous
nature of Node.js by having a method to directly access
the database and retrieve data instead of connecting multiple
clients to the database to extract the requested data.
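A minimal sketch of this direct-retrieval pattern is shown below. The table and column names ("destinations", "id", the time of entry) come from the text, but the exact code in retrieveData.js may differ; the SQL is built as a separate parameterized statement, and actually running it requires the `mysql` npm package and the "liu" database.

```javascript
// Build the parameterized lookup used by the direct-retrieval approach.
// Table and column names come from the text above; everything else is an
// assumed sketch, not the actual retrieveData.js.
function buildIdLookup(time) {
  return { sql: 'SELECT id FROM destinations WHERE time = ?', values: [time] };
}

const lookup = buildIdLookup('2019-07-15 14:32:00'); // hypothetical test time
console.log(lookup.sql);

// In the server, the lookup would then run against the "liu" database, e.g.:
//   const mysql = require('mysql'); // npm package, not bundled with Node.js
//   const conn = mysql.createConnection({ host: 'localhost', database: 'liu', user: '...' });
//   conn.query(lookup.sql, lookup.values, (err, rows) => {
//     if (err) throw err;
//     console.log(rows[0].id); // print the matching ID to the console
//   });
```

Keeping the statement parameterized (the `?` placeholder) also protects the database against SQL injection from malformed client input.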
C. MySQL Database Interaction
Queries and reports sent to the server by clients are sliced
into segments according to the format described in Figure 1
(i.e. bytes 1 to 6 describe location, bytes 7 and 8 represent the
unique identifications of clients, the 9th byte represents the
request type, and bytes 10-12 are reserved for miscellaneous
information to be established in the future as we continue to
develop the server). Once the data is split and assigned to the
appropriate variables, the server can send the information to
the MySQL database for storage and later retrieval. Current
tables in the database include “client queries,” “destinations,”
and “vehicle report.”
The client queries table holds data describing the requests
for information made by clients. This includes the time of
the request, the identification of the client, the location the
client is requesting information about, the type of request (e.g.
traffic flow), and any additional information we want to store
in the future, for which we have created a row in the table
labeled “miscellaneous.”
The destinations table holds information describing data
about the final destinations of vehicles needed for data
collection, and the locations vehicles need to visit to get to
those final destinations, similar to checkpoints. This table
includes the following descriptors:
• Destination: The location of the final destination as well
as the checkpoint locations a vehicle must pass through
on the way to the final destination
• ID: The identities of the vehicles/clients that are traveling
to the desired locations
• Completed: Records whether the vehicle has reached a
specific location
• Time: The time a location was reached
The vehicle report table holds information about events
and conditions for which vehicles provide information. This
includes the time the report was received by the server, the
unique identification of the vehicle that sent the report, the
location the report is describing, and the condition of the
location (i.e. the physical description of the specified
location).
Users can also retrieve data about road conditions from the
MySQL database so that they can plan their days or driving
routes to avoid traffic or unsafe road conditions. Furthermore,
administrators can directly access the database to input where
to send a vehicle. This way, vehicles can be directed to collect
information about a certain area that may not have sufficient
coverage. An administrator also has the ability to check where
any active vehicle is at any time. This allows administrators
to view where vehicles are located along their assigned route
and whether vehicles are available at or around areas for which
data is being requested.
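For illustration, an administrator's destination entry might be built as a parameterized statement like the one below. The column names follow the descriptors listed above, but the actual schema may differ, and executing the statement requires the `mysql` npm package.

```javascript
// Build a parameterized INSERT for the "destinations" table. Column names
// follow the descriptors in the text; the real schema may differ.
function buildDestinationInsert(dest) {
  return {
    sql: 'INSERT INTO destinations (destination, id, completed, time) VALUES (?, ?, ?, ?)',
    values: [dest.destination, dest.id, dest.completed, dest.time]
  };
}

const stmt = buildDestinationInsert({
  destination: '12,34', // hypothetical grid coordinates of a checkpoint
  id: 7,                // hypothetical vehicle/client ID
  completed: 0,         // not yet reached
  time: null            // filled in once the vehicle arrives
});
console.log(stmt.values.length); // 4
// The statement would then be handed to the MySQL driver, e.g.
// connection.query(stmt.sql, stmt.values, callback).
```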
D. Timestamp Algorithm
This algorithm records the time that data is inputted into
the MySQL database, or the time a client sends a query, in UTC
Fig. 3. The city of Indianapolis overlaid with our proposed coordinate system
(Coordinated Universal Time) format. This time is recorded
and stored in the database as appropriate.
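One way to produce such a UTC timestamp in Node.js is sketched below; this is an assumed implementation, as the paper does not show the actual code. `Date.prototype.toISOString()` always reports in UTC, and the result is trimmed to the "YYYY-MM-DD HH:MM:SS" shape MySQL's DATETIME type expects.

```javascript
// Assumed sketch: format the current moment as a UTC timestamp suitable
// for a MySQL DATETIME column ("YYYY-MM-DD HH:MM:SS").
function utcTimestamp(date = new Date()) {
  // toISOString() is always UTC, e.g. "2019-07-15T14:32:00.000Z";
  // keep the first 19 characters and swap the "T" separator for a space.
  return date.toISOString().slice(0, 19).replace('T', ' ');
}

console.log(utcTimestamp(new Date(Date.UTC(2019, 6, 15, 14, 32, 0))));
// "2019-07-15 14:32:00"
```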
E. Concurrent Query Handling
Our project requires a server that accepts multiple, concur-
rent client connections. Therefore, our team decided to utilize
Node.js in conjunction with Socket.io. Node.js is especially
useful when requiring a persistent connection and handling
multiple concurrent clients. Socket.io allows for two-way
communication where each side (both client and server) has
the ability to initiate a request. Furthermore, Socket.io is
well-suited for synchronized communication, meaning that the
server and clients can communicate immediately in real-time.
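A sketch of this Node.js plus Socket.io setup is shown below. The event names (`query`, `ack`) are our assumptions, and the wiring is shown in comments because it requires the `socket.io` npm package; only the handler logic runs standalone.

```javascript
// Assumed sketch of the server's per-client handling; event names are
// hypothetical. The live wiring (commented out) needs the socket.io package.
const handlers = {
  // Given a raw 12-byte query, build the acknowledgement the server would
  // emit back to confirm receipt.
  onQuery(raw) {
    return { status: 'received', bytes: raw.length };
  }
};

// Wiring sketch:
//   const io = require('socket.io')(3000);       // npm package
//   io.on('connection', socket => {              // each client gets a socket
//     socket.on('query', raw => {
//       socket.emit('ack', handlers.onQuery(raw)); // two-way: server replies
//     });
//   });

console.log(handlers.onQuery(Buffer.alloc(12)));
```

Because each connection gets its own socket and handlers run without blocking, many clients can hold persistent, two-way connections to the server at once.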
F. Trajectory Location System
Our team has developed a method to track vehicle locations
on a map of Indianapolis, which I will now propose. Firstly,
we overlay the city of Indianapolis with a coordinate system.
Our research team decided that storing the entire value of the
longitude and latitude of a vehicle's location would require far
more bytes than necessary, and so a coordinate system would
be more appropriate. This way, our system will not have to
deal with negative trajectories, either. The bottom-left corner
of the city will serve as the origin so that a vehicle's location
along the eastern axis can be measured via x-coordinate, and
along the northern axis via y-coordinate (Figure 3). This serves
as an efficient way for vehicles to report their location as well
as have destinations sent to them.
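The conversion from raw GPS readings into this coordinate system might look as follows. The origin point and the 100 m cell size are illustrative assumptions; the paper does not specify them.

```javascript
// Assumed sketch: map (lat, lon) onto the proposed grid. The bottom-left
// corner of the city is the origin, x grows eastward and y northward, so
// no negative coordinates occur inside the covered area.
const ORIGIN = { lat: 39.63, lon: -86.33 }; // assumed SW corner of Indianapolis
const METERS_PER_DEG_LAT = 111320;          // approximate, at this latitude

function toGrid(lat, lon, cellMeters = 100) {
  // Longitude degrees shrink with latitude, so scale by cos(latitude).
  const metersPerDegLon =
    METERS_PER_DEG_LAT * Math.cos((ORIGIN.lat * Math.PI) / 180);
  const x = Math.floor(((lon - ORIGIN.lon) * metersPerDegLon) / cellMeters);
  const y = Math.floor(((lat - ORIGIN.lat) * METERS_PER_DEG_LAT) / cellMeters);
  return { x, y }; // small non-negative integers fit in a few bytes
}

const p = toGrid(39.7684, -86.1581); // roughly downtown Indianapolis
console.log(p.x >= 0 && p.y >= 0); // true
```

Two small integers per position keep reports compact compared with storing full signed latitude/longitude values.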
V. SYSTEM DESIGN
This section outlines both the Task Data Model and Client
Architecture for our smart city surveillance system. While
the majority of the diagrams below are implemented, a few
components are still under development.
Fig. 4. An overview of how the components of our project's framework interact
A. Task Data Model
Figure 4 describes the framework briefly explained in the
“Overview” section of this paper. The model contains point-
ers, which represent how the different components of the
framework interact, specifically which variables from certain
stages are required by components of other stages. The mobile
client's functions enable it to check on the status of its queries:
The client is capable of sending a query to our server to request
information as well as checking on the status of a query sent
to the server. The mobile client has access to information
describing the details of the task or query that is shared
with the server. This includes aspects such as the current
stage the server is at while fetching the requested information,
the task or query's unique identifier, a description of the
requested information, etc. Similarly, the server has access
to this information, as well as aspects such as the current
number of clients connected to the server, the identification
of the requested information that is stored in the database, the
name and location of a vehicle that is sending a report to the
server, etc.
B. Client Architecture
Figure 5 describes the overall architecture of the previously
mentioned mobile client. The main application for this smart
city surveillance system is made up of several levels that are
conceptually layered on top of each other. This means that
levels that are higher in the architecture are designed to rely
on the levels below them.
The User Interface layer displays the most current infor-
mation to clients regarding the current progress of any tasks
or queries sent by a client. The User Interface is a visual
Fig. 5. An overview of the mobile client architecture
representation meant for the benefit of users, having no other
functionality than allowing for user interaction with lower
layers and displaying information to the user. For example,
the user can interact with the interface to view requested data
reports after sending a query, or view the status of a query
that was recently sent to the server.
The lower levels of the framework provide its main functionality. For example, the Task Manager Service in the
application layer works as a background service during the
execution of tasks and acts as a middleman between other
components of the framework. In other words, it is the
centralized control of the system; it coordinates the control
flow of multiple tasks and grants permission for data flow to
occur between various parts of the framework.
The Server Communicator manages all of the communi-
cation with the server. It contains the knowledge of how
data must be presented to the server and the form in which
it must be retrieved for users. The Server Communicator
transforms the data's representation back and forth using
parsing, serialization, and deserialization methods. No other
module has access to the server's implementation.
The other modules present in the architecture are meant to
create, access, or modify generated data and meta-information
to meet the needs of the Task Manager, User Interface, and
Server Communicator.
VI. RESULTS
Using Node.js and Socket.io, we have programmed our
server to have the following capabilities:
• The server accepts client connections
• Clients are able to submit queries to the server
• The server can access the MySQL database
• The database can store the queries it receives
Fig. 6. A graph showing the wait times associated with increasing numbers
of concurrent clients
• The database saves information clients send to the server
in data reports
• The server can respond to clients and confirm their
connection
• The server can dictate vehicle target locations to clients
• The database sends requested data to clients
• The database can retrieve vehicle locations
Through these developments, our research team has created
a server through which clients can both send and request
data about road conditions, and that can communicate with
a database to store information for later retrieval. This way,
the server can aid users in planning their day-to-day lives and
driving routes efficiently.
In addition to these accomplishments, we evaluated the
performance of our server, especially its ability to handle
multiple queries. To do this, we tested the effect of concurrent
clients on client wait times while communicating with the
server. The setup for this experiment required that we run
the server in one command prompt window, while opening
multiple other command prompt windows and executing the
clients code to connect them to our server. To evaluate the
efficiency of the server we connected an increasing number of
clients to the server simultaneously. As can be seen in Figure
6, we tested the efficiency of the server with 10, 21, 30, 42, 50,
and 57 concurrent clients. We then evaluated the wait times
for each group of clients by timing how long it took clients
to connect to the server (in 10⁻⁴ seconds), and plotted the
results in a box plot graph (Figure 6).
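The timing itself might be done as sketched below. This is an assumed approach, as the measurement code is not shown in the paper; `process.hrtime.bigint()` nanosecond readings are converted into the 10⁻⁴-second units used in Figure 6.

```javascript
// Assumed sketch: convert a nanosecond interval into the 10^-4-second
// units plotted in Fig. 6.
function elapsedTenthsOfMillis(startNs, endNs) {
  return Number(endNs - startNs) / 1e5; // 1e5 ns = 10^-4 s
}

const start = process.hrtime.bigint();
// ... the client would connect to the server here ...
const end = process.hrtime.bigint();
console.log(elapsedTenthsOfMillis(start, end) >= 0); // true
```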
VII. DISCUSSION
The Client Wait Time versus Concurrent Clients graph
results show that as an increasing number of clients connect
to the server, there is little correlation with an increase in
median wait times; the asynchronous nature of Node.js allows
for the server to continue onto another task without having
to wait for the completion of its current task. These results
indicate that our server can efficiently handle a large number
of concurrent client connections. This is important, because if
a client is requesting information about the traffic congestion
levels on a roadway they are about to turn down, the server
must be able to quickly process their request and fetch the
desired information. If the server is handling multiple queries
and takes five minutes to return the requested information, then
the server cannot adequately serve its purpose for users trying
to efficiently plan their daily routes.
However, as the number of concurrently connected clients
increases, the standard deviation and maximum wait times also
increase. Although overall median wait times are generally
similar despite the growing number of concurrent clients, the
maximum wait times for clients become longer over time. This
is an issue we hope to improve upon in the future as server
development continues.
VIII. FUTURE WORK
As we continue to develop this smart city surveillance
server, we hope to further extend its abilities. Firstly, we would
like to have the server communicate directly with vehicles by
sending destinations to vehicles for additional data coverage.
Secondly, the server will be able to communicate with Android
devices. We propose that an Android device be mounted onto
the vehicle dashboards of clients so that mobile sensors within
these devices can collect information about roadways and
sidewalks to send to the server for storage in the system's
MySQL database. This data can then be retrieved and sent to
requesters who send a query to the server as necessary. Once
this Android platform is fully developed and can communicate
with our server, the two systems will work in tandem to
supply citizens with data about their surroundings for easy and
efficient daily transportation. Thirdly, we plan to implement
our proposed trajectory location system.
Additionally, we intend to implement the trajectory dictation
function. Through this function an admin will be able to dictate
a destination to vehicles. The vehicles will communicate
with the trajectory and follow the path set out for them,
as detailed by the destination and intermediate checkpoint
locations from the database. More specifically, our current
approach to executing this task is to have a table of checkpoint
destinations and the final destination in the “destinations”
table, which was previously mentioned in this paper. As the
vehicle reaches every point in the communicated trajectory
path, it will be deleted from the table. Once empty, the table
will be dropped from the database. We plan to test this function
by utilizing a trajectory generator. Such data is provided
by many websites, specifically BerlinMod (http://dna.fernuni-
hagen.de/secondo/Berlin MOD/BerlinMOD.html) and Thomas
Brinkhoff's Network-Based Generator of Moving Objects
(http://iapg.jadehs.de/personen/brinkhoff/generator/).
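The checkpoint bookkeeping described above can be sketched with an in-memory stand-in for the "destinations" table. This is an assumed model; the real implementation would issue DELETE statements against MySQL and drop the table once it empties.

```javascript
// In-memory stand-in for the "destinations" table: remove each checkpoint
// as the vehicle reports reaching it; an empty list signals that the
// corresponding table would be dropped from the database.
function markReached(checkpoints, reached) {
  const remaining = checkpoints.filter(
    c => c.x !== reached.x || c.y !== reached.y
  );
  return { remaining, done: remaining.length === 0 };
}

let route = [{ x: 3, y: 7 }, { x: 5, y: 9 }]; // checkpoint, then destination
route = markReached(route, { x: 3, y: 7 }).remaining;
const finished = markReached(route, { x: 5, y: 9 });
console.log(finished.done); // true
```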
Lastly, we intend to develop a Graphical User Interface
(GUI) to ease user interaction with our smart city surveillance
server.
IX. CONCLUSION
As urban populations continue to grow, so does the need
for smart city surveillance systems through which citizens can
report and request data about road and sidewalk conditions in
order to make daily travel more efficient. With the popular-
ization of mobile devices, such as smartphones, more people
than ever before are connected to the Internet wherever they
go. Our team takes advantage of this existing infrastructure,
as well as mobile devices' many sensors and the high mobility
of vehicles, to create a participatory sensing network that can
provide citizens with information about the condition of their
surroundings.
The inner workings of this server have been outlined above.
We described our framework, in which clients communicate
with our server, and which can, in turn, insert and retrieve
information about street conditions from a database. There
is still, however, a need to complete the development of the
Android platform and the trajectory dictation function, connect
the server to vehicles, and improve the ability of the server
to handle multiple and concurrent client connections so that
maximum wait times decrease.
Overall, we as a research team have created a server for
civic benefit that we believe contributes to the development of
smart city surveillance systems across cities worldwide.
REFERENCES
[1] Pavlov, D. V., and E. Lupu. Hive: An Extensible and Scalable Framework for Mobile Crowd-sourcing (Rep.). Imperial College London, 2013.
[2] Burke, J., D. Estrin, M. Hansen, A. Parker, N. Ramanathan, S. Reddy, et al. "Participatory Sensing." Center for Embedded Network Sensing, UCLA, 2006.
[3] Kanhere, S. S. "Participatory Sensing: Crowdsourcing Data from Mobile Smartphones in Urban Spaces." 2011 IEEE 12th International Conference on Mobile Data Management, 2011.
[4] Perera, C., A. Zaslavsky, P. Christen, and D. Georgakopoulos. "Sensing as a Service Model for Smart Cities Supported by Internet of Things." Transactions on Emerging Telecommunications Technologies 25.1 (2013): 81-93.
[5] Jaimes, L. G., I. J. Vergara-Laurens, and A. Raij. "A Survey of Incentive Techniques for Mobile Crowd Sensing." IEEE Internet of Things Journal 2.5 (2015): 370-80.