Dear Students,
Ingenious Techno Solution offers expert guidance on your Final Year IEEE & Non-IEEE projects in the following domains:
JAVA
.NET
EMBEDDED SYSTEMS
ROBOTICS
MECHANICAL
MATLAB, etc.
For further details, contact us: enquiry@ingenioustech.in, 044-42046028 or 8428302179.
We are located at the following venue:
Ingenious Techno Solution
#241/85, 4th floor
Rangarajapuram main road,
Kodambakkam (Power House)
http://www.ingenioustech.in/
2011 IEEE Projects
#241/85, 4th floor, Rangarajapuram main road, Kodambakkam (Power House), Chennai 600024
http://www.ingenioustech.in/ | enquiry@ingenioustech.in | 08428302179 / 044-42046028
S.NO | TITLE (2010) | DOMAIN | PLATFORM | ABSTRACT
1. A Machine Learning Approach to TCP Throughput Prediction
Domain: Networking | Platform: .NET
TCP throughput prediction is an important capability for networks where multiple paths exist between data senders and receivers. In this paper, we describe a new lightweight method for TCP throughput prediction. Our predictor uses Support Vector Regression (SVR); prediction is based on both prior file transfer history and measurements of simple path properties. We evaluate our predictor in a laboratory setting where ground truth can be measured with perfect accuracy. We report the performance of our predictor for oracular and practical measurements of path properties over a wide range of traffic conditions and transfer sizes. For bulk transfers in heavy traffic using oracular measurements, TCP throughput is predicted within 10% of the actual value 87% of the time, representing nearly a threefold improvement in accuracy over prior history-based methods. For practical measurements of path properties, predictions can be made within 10% of the actual value nearly 50% of the time, approximately a 60% improvement over history-based methods, and with much lower measurement traffic overhead. We implement our predictor in a tool called PathPerf, test it in the wide area, and show that PathPerf predicts TCP throughput accurately over diverse wide area paths.
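The history-based baselines mentioned above can be sketched very simply: an exponentially weighted moving average over past transfer throughputs. The function name and smoothing factor below are illustrative assumptions, not the paper's PathPerf code.

```python
# Illustrative history-based throughput predictor (EWMA over past transfers).
# This is the kind of baseline the SVR predictor is compared against.
def ewma_predict(history, alpha=0.5):
    """Predict the next throughput (Mbps) from past measurements."""
    if not history:
        raise ValueError("need at least one past measurement")
    estimate = history[0]
    for sample in history[1:]:
        # Recent transfers are weighted more heavily than old ones.
        estimate = alpha * sample + (1 - alpha) * estimate
    return estimate

print(ewma_predict([10.0, 12.0, 11.0]))
```

The SVR approach improves on this by also feeding in measured path properties, rather than relying on history alone.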
2. Feedback-Based Scheduling for Load-Balanced Two-Stage Switches
Platform: .NET
A framework for designing feedback-based scheduling algorithms is proposed for elegantly solving the notorious packet missequencing problem of a load-balanced switch. Unlike existing approaches, we show that the efforts made in load balancing and keeping packets in order can complement each other. Specifically, at each middle-stage port between the two switch fabrics of a load-balanced switch, only a single-packet buffer for each virtual output queueing (VOQ) is required. Although packets belonging to the same flow pass through different middle-stage VOQs, the delays they experience at different middle-stage ports will be identical. This is made possible by properly selecting and coordinating the two sequences of switch configurations to form a joint sequence with both staggered symmetry property and in-order packet delivery property. Based on the staggered symmetry property, an efficient feedback mechanism is designed to allow the right middle-stage port occupancy vector to be delivered to the right input port at the right time. As a result, the performance of load balancing as well as the switch throughput is significantly improved. We further extend this feedback mechanism to support the multicabinet implementation of a load-balanced switch, where the propagation delay between switch linecards and switch fabrics is nonnegligible. As compared to the existing load-balanced switch architectures and scheduling algorithms, our solutions impose a modest requirement on switch hardware, but consistently yield better delay-throughput performance. Last but not least, some extensions and refinements are made to address the scalability, implementation, and fairness issues of our solutions.
3. Trust Management in Mobile Ad Hoc Networks Using a Scalable Maturity-Based Model
Platform: .NET
In this paper, we propose a human-based model which builds a trust relationship between nodes in an ad hoc network. The trust is based on previous individual experiences and on the recommendations of others. We present the Recommendation Exchange Protocol (REP), which allows nodes to exchange recommendations about their neighbors. Our proposal does not require disseminating the trust information over the entire network. Instead, nodes only need to keep and exchange trust information about nodes within radio range. Without the need for global trust knowledge, our proposal scales well for large networks while still reducing the number of exchanged messages and therefore the energy consumption. In addition, we mitigate the effect of colluding attacks composed of liars in the network. A key concept we introduce is the relationship maturity, which allows nodes to improve the efficiency of the proposed model in mobile scenarios. We show the correctness of our model in a single-hop network through simulations. We also extend the analysis to mobile multihop networks, showing the benefits of the relationship maturity concept. We evaluate the impact of malicious nodes that send false recommendations to degrade the efficiency of the trust model. Finally, we analyze the performance of the REP protocol and show its scalability. We show that our implementation of REP can significantly reduce the number of messages.
4. Online Social Networks
Domain: Network | Platform: .NET
This project covers OSN applications (for example, location-based social networks), OSN services, security and privacy of OSNs, and human mobility models based on social networks. An online social network service focuses on social relations among people who, e.g., share interests and activities. A social network service essentially consists of a representation of each user (often a profile), his/her social links, and a variety of additional services. Most social network services are web based and provide means for users to interact over the Internet, such as e-mail and instant messaging. Although online community services are sometimes considered a social network, online community services are group-centered. Social networking sites allow users to share ideas, activities, events, and interests within their individual networks.
5. Synchronization of Local Desktop to Internet Using File Transfer Protocol
File synchronization in computing is the process of making sure that files in two or more locations are updated through certain rules. In one-way file synchronization, also called mirroring, updated files are copied from a 'source' location to one or more 'target' locations, but no files are copied back to the source location. In two-way file synchronization, updated files are copied in both directions, usually with the purpose of keeping the two locations identical to each other. In this article, the term synchronization refers exclusively to two-way file synchronization.
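The two-way rule described above can be sketched with in-memory dictionaries standing in for the two locations. A real desktop-to-Internet tool would talk FTP (e.g. via Python's ftplib); all names and the mtime-based conflict rule here are illustrative assumptions.

```python
# Two-way synchronization sketch: the newer copy of each file wins on both sides.
def sync_two_way(a, b):
    """a and b map filename -> (mtime, content)."""
    for name in set(a) | set(b):
        fa, fb = a.get(name), b.get(name)
        if fa is None or (fb is not None and fb[0] > fa[0]):
            a[name] = fb  # b has the file and it is newer (or a lacks it): copy b -> a
        elif fb is None or fa[0] > fb[0]:
            b[name] = fa  # a's copy is newer (or b lacks it): copy a -> b

local = {"notes.txt": (100, "v1"), "todo.txt": (50, "old")}
remote = {"todo.txt": (60, "new")}
sync_two_way(local, remote)
print(local["todo.txt"][1], remote["notes.txt"][1])
```

After the call, both locations hold the newer `todo.txt` and both hold `notes.txt`, which only one side had before.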
6. Intrusion Detection for Grid and Cloud Computing
Providing security in a distributed system requires more than user authentication with passwords or digital certificates and confidentiality in data transmission. The Grid and Cloud Computing Intrusion Detection System integrates knowledge and behavior analysis to detect intrusions.
7. Adaptive Physical Carrier Sense in Topology-Controlled Wireless Networks
Transmit power and carrier sense threshold are key MAC/PHY parameters in carrier sense multiple access (CSMA) wireless networks. Transmit power control has been extensively studied in the context of topology control. However, the effect of the carrier sense threshold on topology control has not been properly investigated in spite of its crucial role. Our key motivation is that the performance of a topology-controlled network may become worse than that of a network without any topology control unless the carrier sense threshold is properly chosen. In order to remedy this deficiency of conventional topology control, we present a framework on how to incorporate physical carrier sense into topology control. We identify that joint control of transmit power and carrier sense threshold can be efficiently divided into topology control and carrier sense adaptation. We devise a distributed carrier sense update algorithm (DCUA), by which each node drives its carrier sense threshold toward a desirable operating point in a fully distributed manner. We derive a sufficient condition for the convergence of DCUA. To demonstrate the utility of integrating physical carrier sense into topology control, we equip a localized topology control algorithm, LMST, with the capability of DCUA. Simulation studies show that LMST-DCUA significantly outperforms LMST and the standard
8. On the Quality of Service of Crash-Recovery Failure Detectors
Domain: Dependable and Security | Platform: .NET
We model the probabilistic behavior of a system comprising a failure detector and a monitored crash-recovery target. We extend failure detectors to take account of failure recovery in the target system. This involves extending QoS measures to include the recovery detection speed and proportion of failures detected. We also extend estimating the parameters of the failure detector to achieve a required QoS to configuring the crash-recovery failure detector. We investigate the impact of the dependability of the monitored process on the QoS of our failure detector. Our analysis indicates that variation in the MTTF and MTTR of the monitored process can have a significant impact on the QoS of our failure detector. Our analysis is supported by simulations that validate our theoretical results.
9. Layered Approach Using Conditional Random Fields
Intrusion detection faces two challenges: an intrusion detection system must accurately detect malicious activities in a network, and it must perform efficiently to cope with the large amount of network traffic. We address these two issues of accuracy and efficiency using Conditional Random Fields and a Layered Approach. We show that high attack detection accuracy can be achieved by using Conditional Random Fields and high efficiency by implementing the Layered Approach. Experimental results on the benchmark KDD '99 intrusion data set show that our proposed system based on Layered Conditional Random Fields outperforms other well-known methods such as decision trees and naive Bayes. The improvement in attack detection accuracy is very high, particularly for the U2R attacks (34.8 percent improvement) and the R2L attacks (34.5 percent improvement). Statistical tests also demonstrate higher confidence in detection accuracy for our method. Finally, we show that our system is robust and is able to handle noisy data without compromising performance.
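The efficiency claim of the Layered Approach (each layer screens one attack class, so costlier layers only see traffic that passed the earlier ones) can be sketched as follows. The detectors and threshold features below are invented for illustration; they are not the paper's trained CRF models.

```python
# Layered detection sketch: a connection stops at the first layer that flags it,
# so deeper, more expensive layers never run on already-blocked traffic.
LAYERS = [
    ("probe", lambda c: c["conn_rate"] > 100),    # scanning activity
    ("dos",   lambda c: c["syn_ratio"] > 0.9),    # flooding activity
    ("r2l",   lambda c: c["failed_logins"] > 5),  # remote-to-local attempts
]

def classify(conn):
    for label, detector in LAYERS:
        if detector(conn):
            return label
    return "normal"

print(classify({"conn_rate": 3, "syn_ratio": 0.95, "failed_logins": 0}))
```

In the actual system, each `detector` would be a Conditional Random Field trained on the features relevant to that attack class.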
10. Privacy-Preserving Sharing of Sensitive Information
Domain: Security and Privacy | Platform: .NET
Privacy-preserving sharing of sensitive information (PPSSI) is motivated by the increasing need for entities (organizations or individuals) that don't fully trust each other to share sensitive information. Many types of entities need to collect, analyze, and disseminate data rapidly and accurately, without exposing sensitive information to unauthorized or untrusted parties. Although statistical methods have been used to protect data for decades, they aren't foolproof and generally involve a trusted third party. Recently, the security research community has studied (and, in a few cases, deployed) techniques using secure multiparty function evaluation, encrypted keywords, and private information retrieval. However, few practical tools and technologies provide data privacy, especially when entities have certain common goals and require (or are mandated) some sharing of sensitive information. To this end, PPSSI technology aims to enable sharing information without exposing more than the minimum necessary to complete a common task.
11. PEACE
Security and privacy issues are of utmost concern in pushing the success of wireless mesh networks (WMNs) toward wide deployment and support for service-oriented applications. Despite the necessity, limited security research has been conducted toward privacy preservation in WMNs. This motivates us to develop PEACE, a novel Privacy-Enhanced yet Accountable security framework tailored for WMNs.
12. The Phish-Market Protocol: Secure Sharing Between Competitors
Platform: .NET
One way banks mitigate phishing's effects is to remove fraudulent websites or suspend abusive domain names. The removal process, called a "take-down," is often subcontracted to specialist firms, who refuse to share feeds of phishing website URLs with each other. Consequently, many phishing websites aren't removed. The take-down companies are reticent to exchange feeds, fearing that competitors with less comprehensive lists might free-ride off their efforts. Here, the authors propose the Phish-Market protocol, which enables companies to be compensated for information they provide to their competitors, encouraging them to share. The protocol is designed so that the contributing firm is compensated only for those websites affecting its competitor's clients and only those previously unknown to the receiving firm. The receiving firm, on the other hand, is guaranteed privacy for its client list. The protocol solves a more general problem of sharing between competitors; applications to data brokers in marketing, finance, energy exploration, and beyond could also benefit.
13. Internet Filtering Issues and Challenges
Platform: .NET
Various governments have been considering mechanisms to filter out illegal or offensive Internet material. The accompanying debate raises a number of questions from a technical perspective. This article explores some of these questions: What filtering techniques exist? Are they effective in filtering out the specific content? How easy is circumventing them? Where should they be placed in the Internet architecture?
14. Can Public-Cloud Security Meet Its Unique Challenges?
Platform: .NET
Because cloud-computing environments' security vulnerabilities differ from those of traditional data centers, perimeter-security approaches will no longer work. Security must move from the perimeter to the virtual machines.
15. Encrypting Keys Securely
Platform: .NET
Encryption keys are sometimes encrypted themselves; doing that properly requires special care. Although it might look like an oversight at first, the broadly accepted formal security definitions for cryptosystems don't allow encryption of key-dependent messages. Furthermore, key-management systems frequently use key encryption or wrapping, which might create dependencies among keys that lead to problems with simple access-control checks. Security professionals should be aware of this risk and take appropriate measures. Novel cryptosystems offer protection for key-dependent messages and should be considered for practical use. Through enhanced access control in key-management systems, you can prevent security-interface attacks.
16. Auto-Context and Its Application to High-Level Vision Tasks and 3D Brain Image Segmentation
Domain: Pattern Analysis and Machine Intelligence | Platform: .NET
The notion of using context information for solving high-level vision and medical image segmentation problems has been increasingly realized in the field. However, how to learn an effective and efficient context model, together with an image appearance model, remains mostly unknown. The current literature using Markov Random Fields (MRFs) and Conditional Random Fields (CRFs) often involves specific algorithm design in which the modeling and computing stages are studied in isolation. In this paper, we propose a learning algorithm, auto-context. Given a set of training images and their corresponding label maps, we first learn a classifier on local image patches. The discriminative probability (or classification confidence) maps created by the learned classifier are then used as context information, in addition to the original image patches, to train a new classifier. The algorithm then iterates until convergence. Auto-context integrates low-level and context information by fusing a large number of low-level appearance features with context and implicit shape information. The resulting discriminative algorithm is general and easy to implement. Under nearly the same parameter settings in training, we apply the algorithm to three challenging vision applications: foreground/background segregation, human body configuration estimation, and scene region labeling. Moreover, context also plays a very important role in medical/brain images, where the anatomical structures are mostly constrained to relatively fixed positions. With only some slight changes resulting from using 3D instead of 2D features, the auto-context algorithm applied to brain MRI image segmentation is shown to outperform state-of-the-art algorithms specifically designed for this domain. Furthermore, the scope of the proposed algorithm goes beyond image analysis, and it has the potential to be used for a wide variety of structured prediction problems.
17. CSMA Protocol: Mitigating Performance Degradation in Congested Sensor Networks
Platform: Java
This system is developed to show the descriptive management of dreadful conditions in congested sensor networks. Dreadful conditions in sensor networks, or any other wired networks, happen when bandwidth differs between the receiving and sending points. The channel capacity of the network may not be sufficient to handle the speed of packets sent. In this system, we present a view of how data can be sent through a congested channel while ensuring safe delivery of the packets to the destination. The system is developed using Java Swing technology with JDK 1.6. All the nodes are developed as Swing APIs; multiple APIs form a sink to the destination. Packets are sent from source to destination via the sink. In the sink, a node is made congested and, using the channel capacity, the path of the data is calculated. Based on the result of the calculation, the congestion in the sink is dissolved and data is set free to the destination. This system is an application to maintain the free flow of data in congested sensor networks using a Differentiated Routing Protocol and priority queues, which maintain priority among data types.
18. Feature Analysis and Evaluation for Automatic Emotion Identification in Speech
Domain: Multimedia | Platform: .NET
The definition of parameters is a crucial step in the development of a system for identifying emotions in speech. Although there is no agreement on which are the best features for this task, it is generally accepted that prosody carries most of the emotional information. Most works in the field use some kind of prosodic features, often in combination with spectral and voice quality parametrizations. Nevertheless, no systematic study has been done comparing these features. This paper presents the analysis of the characteristics of features derived from prosody, spectral envelope, and voice quality as well as their capability to discriminate emotions. In addition, early fusion and late fusion techniques for combining different information sources are evaluated. The results of this analysis are validated with experimental automatic emotion identification tests. Results suggest that spectral envelope features outperform the prosodic ones. Even when different parametrizations are combined, the late fusion of long-term spectral statistics with short-term spectral envelope parameters provides an accuracy comparable to that obtained when all parametrizations are combined.
19. Automatic Detection of Off-Task Behaviors in Intelligent Tutoring Systems with Machine Learning Techniques
Domain: Learning Technologies | Platform: .NET
Identifying off-task behaviors in intelligent tutoring systems is a practical and challenging research topic. This paper proposes a machine learning model that can automatically detect students' off-task behaviors. The proposed model only utilizes the data available from the log files that record students' actions within the system. The model utilizes a set of time features, performance features, and mouse movement features, and is compared to 1) a model that only utilizes time features and 2) a model that uses time and performance features. Different students have different types of behaviors; therefore, a personalized version of the proposed model is constructed and compared to the corresponding nonpersonalized version. In order to address the data sparseness problem, a robust Ridge Regression algorithm is utilized to estimate model parameters. An extensive set of experimental results demonstrates the power of using multiple types of evidence, the personalized model, and the robust Ridge Regression algorithm.
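The Ridge Regression step can be written out in its closed form, w = (X^T X + lam*I)^(-1) X^T y, where the lam*I term is what keeps the estimate stable on sparse data. The toy features below (seconds on step, hint count, mouse idle time) are invented stand-ins for the paper's time/performance/mouse features.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge estimate: solve (X^T X + lam*I) w = X^T y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

# Toy data: [seconds on step, hints requested, seconds of mouse idle] -> off-task label
X = np.array([[5.0, 0.0, 1.0], [60.0, 3.0, 40.0], [8.0, 1.0, 2.0], [90.0, 0.0, 70.0]])
y = np.array([0.0, 1.0, 0.1, 1.0])
w = ridge_fit(X, y, lam=0.1)
print(w.shape)  # one learned weight per feature
```

Increasing `lam` shrinks the weights toward zero, trading a little bias for much lower variance when training data is scarce.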
20. Web-Application Security: From Reactive to Proactive
Domain: IT | Platform: .NET
Here's a sobering thought for all managers responsible for Web applications: without proactive consideration of an application's security, attackers can bypass nearly all lower-layer security controls simply by using the application in a way its developers didn't envision. Learn how to address vulnerabilities proactively and early on to avoid the devastating consequences of a successful attack.
21. Trust and Reputation Management
Domain: Internet Computing | Platform: .NET
Trust and reputation management research is highly interdisciplinary, involving researchers from networking and communication, data management and information systems, e-commerce and service computing, artificial intelligence, and game theory, as well as the social sciences and evolutionary biology. Trust and reputation management has played and will continue to play an important role in Internet and social computing systems and applications. This special issue addresses key issues in the field, such as representation, recommendation aggregation, and attack-resilient reputation systems.
22. Multi-body Structure-and-Motion Segmentation by Branch-and-Bound Model Selection
Domain: Image Processing | Platform: .NET
An efficient and robust framework is proposed for two-view multiple structure-and-motion segmentation of an unknown number of rigid objects. The segmentation problem has three unknowns, namely the object memberships, the corresponding fundamental matrices, and the number of objects. To handle this otherwise recursive problem, hypotheses for fundamental matrices are generated through local sampling. Once the hypotheses are available, a combinatorial selection problem is formulated to optimize a model selection cost which takes into account the hypotheses' likelihoods and the model complexity. An explicit model for outliers is also added for robust segmentation. The model selection cost is minimized through the branch-and-bound technique of combinatorial optimization. The proposed branch-and-bound approach efficiently searches the solution space and guarantees optimality over the current set of hypotheses. The efficiency and the guarantee of optimality of the method are due to its ability to reject solutions without explicitly evaluating them. The proposed approach was validated with synthetic data, and segmentation results are presented for real images.
23. Active Image Re-ranking
Platform: .NET
Image search reranking methods usually fail to capture the user's intention when the query term is ambiguous. Therefore, reranking with user interactions, or active reranking, is highly demanded to effectively improve the search performance. The essential problem in active reranking is how to target the user's intention. To this end, this paper presents a structural-information-based sample selection strategy to reduce the user's labeling efforts. Furthermore, to localize the user's intention in the visual feature space, a novel local-global discriminative dimension reduction algorithm is proposed. In this algorithm, a submanifold is learned by transferring the local geometry and the discriminative information from the labeled images to the whole (global) image database. Experiments on both synthetic datasets and a real Web image search dataset demonstrate the effectiveness of the proposed active reranking scheme, including both the structural-information-based active sample selection strategy and the local-global discriminative dimension reduction algorithm.
24. Content-Based Image Retrieval Using PSO
Platform: .NET
An innovative approach based on an evolutionary stochastic algorithm, namely the Particle Swarm Optimizer (PSO), is proposed in this paper as a solution to the problem of intelligent retrieval of images in large databases. The problem is recast as an optimization one, where a suitable cost function is minimized through a customized PSO. Accordingly, relevance feedback is used in order to exploit the information of the user, with the aim of both guiding the particles inside the search space and dynamically assigning different weights to the features.
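As a hedged sketch of the optimization core, a bare-bones PSO minimizing a toy cost function is shown below. In the retrieval setting, the cost would instead score candidate feature weights against the user's relevance feedback; that mapping, and all constants here, are illustrative assumptions.

```python
import numpy as np

def pso_minimize(cost, dim=2, n_particles=20, iters=50, seed=0):
    """Bare-bones particle swarm: particles track personal bests and a global best."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5.0, 5.0, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([cost(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        # Inertia plus stochastic attraction toward personal and global bests.
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([cost(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

best = pso_minimize(lambda p: float(np.sum(p ** 2)))  # sphere cost, minimum at origin
print(float(np.sum(best ** 2)) < 1.0)
```

The same loop works unchanged for any cost function of a fixed-length real vector, which is what makes PSO attractive for tuning feature weights.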
25. Automatic Composition of Semantic Web Services: An Enhanced State Space Search Approach
Platform: .NET
This paper presents a novel approach for semantic web service composition based on the traditional state space search approach. We regard the automatic web service composition problem as an AI problem-solving task and propose an enhanced state space search approach for the web service composition domain. This approach can be used not only for automatic service composition but also for the general problem-solving domain. In addition, in order to validate the feasibility of our approach, a prototype system is implemented.
26. Knowledge-First Web Services: An E-Government Example
Platform: .NET
Although semantic technologies aren't used in current software systems on a large scale yet, they offer high potential to significantly improve the quality of electronic services, especially in the E-Government domain. This paper therefore presents an approach that not only incorporates semantic technologies but allows creating E-Government services solely based on semantic models. This multiplies the benefits of the ontology modeling efforts, minimizes development and maintenance time and costs, improves user experience, and enforces transparency.
27. The Applied Research of Cloud Computing Platform Architecture in the E-Learning Area
Domain: Cloud Computing | Platform: .NET
This paper first introduces the characteristics of current E-Learning, then analyzes the concept and characteristics of cloud computing and describes the architecture of a cloud computing platform. By combining the characteristics of E-Learning with the current major infrastructure approaches of cloud computing platforms, the paper structures a relatively complete, integrated E-Learning platform, applies the cloud computing platform to the study of E-Learning, and focuses on the application in order to improve the resources' stability, balance, and utilization. Under these conditions, the platform will meet the demands of current teaching and research activities and maximize the value of E-Learning.
28. Cloud Computing System Based on Trusted Computing Platform
Platform: .NET
Cloud computing provides people a way to share large amounts of distributed resources belonging to different organizations. That is a good way to share many kinds of distributed resources, but it also makes security problems more complicated and more important for users than before. In this paper, we analyze some security requirements in the cloud computing environment. Since the security problems lie in both software and hardware, we provide a method to build a trusted computing environment for cloud computing by integrating the trusted computing platform (TCP) into the cloud computing system. We propose a new prototype system in which the cloud computing system is combined with the Trusted Platform Support Service (TSS), and TSS is based on the Trusted Platform Module (TPM). With this design, better results can be obtained in authentication, role-based access control, and data protection in the cloud computing environment.
29. IT Auditing to Assure a Secure Cloud Computing
Platform: .NET
In this paper we discuss the evolution of the cloud computing paradigm and present a framework for secure cloud computing through IT auditing. Our approach is to establish a general framework using checklists by following data flow and its lifecycle. The checklists are based on the cloud deployment models and cloud service models. The contribution of the paper is to understand the implications of cloud computing and what secure cloud computing means via IT auditing, rather than to propose a new methodology or new technology to secure cloud computing. Our holistic approach has strategic value to those who are using or considering using cloud computing, because it addresses concerns such as security, privacy, and regulatory compliance.
30. Performance Evaluation of Cloud Computing Offerings
Platform: .NET
Advanced computing on cloud computing infrastructures can only become a viable alternative for the enterprise if these infrastructures can provide proper levels of nonfunctional properties (NFPs). A company that focuses on service-oriented architectures (SOA) needs to know which configuration would provide the proper levels for individual services if they are deployed in the cloud. In this paper we present an approach for performance evaluation of cloud computing configurations. While cloud computing providers assure certain service levels, this is typically done for the platform and not for a particular service instance. Our approach focuses on NFPs of individual services and thereby provides more relevant and granular information. An experimental evaluation in Amazon Elastic Compute Cloud (EC2) verified our approach.
31. Providing Privacy Preservation in Cloud Computing
Platform: .NET
People can only enjoy the full benefits of cloud computing if we can address the very real privacy and security concerns that come with storing sensitive personal information in databases and software scattered around the Internet. There are many service providers on the Internet; we can call each service a cloud. Each cloud service exchanges data with other clouds, so when data is exchanged between clouds, the problem of privacy disclosure arises. The privacy disclosure problem concerning an individual or a company is thus inevitably exposed when releasing or sharing data in a cloud service. Privacy is an important issue for cloud computing, both in terms of legal compliance and user trust, and needs to be considered at every phase of design. Our paper surveys some privacy-preserving technologies used in cloud computing services.
32. VEBEK: Virtual Energy- Designing cost-efficient, secure network protocols for Wireless .net
Based Encryption and Wireless Sensor Networks (WSNs) is a challenging Computing
Keying for Wireless Sensor problem because sensors are resource-limited
Networks wireless devices. Since the communication cost is the
most dominant factor in a sensor's energy
consumption, we introduce an energy-efficient Virtual
Energy-Based Encryption and Keying (VEBEK) scheme
for WSNs that significantly reduces the number of
transmissions needed for rekeying to avoid stale keys.
In addition to the goal of saving energy, minimal
transmission is imperative for some military
applications of WSNs where an adversary could be
monitoring the wireless spectrum. VEBEK is a secure
communication framework where sensed data is
encoded using a scheme based on a permutation code
generated via the RC4 encryption mechanism. The key
to the RC4 encryption mechanism dynamically
changes as a function of the residual virtual energy of
the sensor. Thus, a one-time dynamic key is employed
for one packet only and different keys are used for the
successive packets of the stream. The intermediate
nodes along the path to the sink are able to verify the
authenticity and integrity of the incoming packets
using a predicted value of the key generated by the
sender's virtual energy, thus eliminating the need for
specific rekeying messages. VEBEK is able to efficiently
detect and filter false data injected into the network by
malicious outsiders. The VEBEK framework consists of
two operational modes (VEBEK-I and VEBEK-II), each
of which is optimal for different scenarios. In VEBEK-I,
each node monitors its one-hop neighbors, whereas
VEBEK-II statistically monitors downstream nodes.
We have evaluated VEBEK's feasibility and
performance analytically and through simulations. Our
results show that VEBEK, without incurring
transmission overhead (increasing packet size or
sending control messages for rekeying), is able to
eliminate malicious data from the network in an
energy-efficient manner. We also show that our
framework performs better than other comparable
schemes in the literature with an overall 60-100
percent improvement in energy savings without the
assumption of a reliable medium access control layer.
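The per-packet keying idea above can be sketched in a few lines of Python. This is a minimal illustration, not VEBEK's exact construction: the class name and the SHA-256 derivation of the RC4 key from the residual virtual energy are our own simplifications.

```python
import hashlib

def rc4_keystream(key: bytes, n: int) -> bytes:
    """Produce n bytes of RC4 keystream (standard KSA + PRGA)."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    out = bytearray()
    for _ in range(n):                        # pseudo-random generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

class VirtualEnergyNode:
    """Encodes each packet with a one-time RC4 key derived from the node's
    residual virtual energy; the energy drops per packet, so keys never repeat
    and no explicit rekeying messages are needed."""
    def __init__(self, initial_energy: float, tx_cost: float):
        self.energy = initial_energy
        self.tx_cost = tx_cost
    def process(self, packet: bytes) -> bytes:
        # key = f(residual virtual energy); SHA-256 here is our simplification
        key = hashlib.sha256(repr(self.energy).encode()).digest()
        self.energy -= self.tx_cost
        ks = rc4_keystream(key, len(packet))
        return bytes(p ^ k for p, k in zip(packet, ks))  # XOR: encode = decode
```

A forwarder that tracks the same virtual energy can recompute the key and verify the packet without any key exchange.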
33. Secure Data Collection in Compromised node and denial of service are two key .net
Wireless Sensor Networks attacks in wireless sensor networks (WSNs). In this
Using Randomized paper, we study data delivery mechanisms that can
Dispersive Routes with high probability circumvent black holes formed
by these attacks. We argue that classic multipath
routing approaches are vulnerable to such attacks,
mainly due to their deterministic nature. So once the
adversary acquires the routing algorithm, it can
compute the same routes known to the source, hence,
making all information sent over these routes
vulnerable to its attacks. In this paper, we develop
mechanisms that generate randomized multipath
routes. Under our designs, the routes taken by the
"shares" of different packets change over time. So even
if the routing algorithm becomes known to the
adversary, the adversary still cannot pinpoint the
routes traversed by each packet. Besides randomness,
the generated routes are also highly dispersive and
energy efficient, making them quite capable of
circumventing black holes. We analytically investigate
the security and energy performance of the proposed
schemes. We also formulate an optimization problem
to minimize the end-to-end energy consumption under
given security constraints. Extensive simulations are
conducted to verify the validity of our mechanisms.
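The "shares" mentioned above can be illustrated with a simple XOR-based secret-sharing sketch in Python; the function names and the random-walk route picker are illustrative stand-ins, not the paper's actual dispersive route-generation mechanisms.

```python
import os
import random

def split_into_shares(packet: bytes, n: int) -> list:
    """XOR-based (n, n) secret sharing: every share looks random on its own,
    and all n shares are required to reconstruct the packet."""
    shares = [os.urandom(len(packet)) for _ in range(n - 1)]
    last = packet
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    return shares

def reconstruct(shares) -> bytes:
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

def random_routes(neighbors, n_shares: int, hops: int, rng=None):
    """Draw an independent random walk for each share -- a stand-in for the
    paper's dispersive routes; even an adversary who knows this code cannot
    predict which nodes a given share will traverse."""
    rng = rng or random.Random()
    return [[rng.choice(neighbors) for _ in range(hops)] for _ in range(n_shares)]
```

A black hole that captures fewer than all n shares of a packet learns nothing about its contents.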
34. Aging Bloom Filter with A Bloom filter is a simple but powerful data structure Data Mining .net
Two Active Buffers for that can check membership in a static set. As Bloom
Dynamic Sets. filters become more popular for network applications,
a membership query for a dynamic set is also required.
Some network applications require high-speed
processing of packets. For this purpose, Bloom filters
should reside in a fast and small memory, SRAM. In
this case, due to the limited memory size, stale data in
the Bloom filter should be deleted to make space for
new data. Namely, the Bloom filter needs aging, like
LRU caching. In this paper, we propose a new aging
scheme for Bloom filters. The proposed scheme
utilizes the memory space more efficiently than double
buffering, the current state of the art. We prove
theoretically that the proposed scheme outperforms
double buffering. We also perform experiments on real
Internet traces to verify the effectiveness of the
proposed scheme.
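To make the aging idea concrete, here is a minimal Python sketch of the double-buffering baseline that the paper claims to outperform (the class names are ours): when the active filter reaches capacity, it is retired wholesale and a fresh filter takes over, so stale entries age out one buffer at a time.

```python
import hashlib

class BloomFilter:
    """Plain Bloom filter over an m-bit array with k hash functions."""
    def __init__(self, m: int, k: int):
        self.m, self.k, self.bits, self.count = m, k, 0, 0
    def _positions(self, item: bytes):
        for i in range(self.k):
            digest = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.m
    def add(self, item: bytes):
        for pos in self._positions(item):
            self.bits |= 1 << pos
        self.count += 1
    def __contains__(self, item: bytes):
        return all((self.bits >> pos) & 1 for pos in self._positions(item))

class DoubleBufferingBloomFilter:
    """Aging via two buffers: the active one takes inserts; when it fills,
    it replaces the retired one, silently evicting the oldest entries."""
    def __init__(self, m: int, k: int, capacity: int):
        self.m, self.k, self.capacity = m, k, capacity
        self.active = BloomFilter(m, k)
        self.retired = BloomFilter(m, k)
    def add(self, item: bytes):
        if self.active.count >= self.capacity:
            self.retired = self.active                # oldest buffer is dropped
            self.active = BloomFilter(self.m, self.k)
        self.active.add(item)
    def __contains__(self, item: bytes):
        return item in self.active or item in self.retired
```

The paper's proposed scheme uses the same two-buffer memory budget more efficiently than this baseline.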
35. Bayesian Classifiers The Bayesian classifier is a fundamental classification .net
Programmed in SQL technique. In this work, we focus on programming
Bayesian classifiers in SQL. We introduce two
classifiers: naive Bayes and a classifier based on class
decomposition using K-means clustering. We consider
two complementary tasks: model computation and
scoring a data set. We study several layouts for tables
and several indexing alternatives. We analyze how to
transform equations into efficient SQL queries and
introduce several query optimizations. We conduct
experiments with real and synthetic data sets to
evaluate classification accuracy, query optimizations,
and scalability. Our Bayesian classifier is more
accurate than naive Bayes and decision trees. Distance
computation is significantly accelerated with
horizontal layout for tables, denormalization, and
pivoting. We also compare naive Bayes
implementations in SQL and C++: SQL is about four
times slower. Our Bayesian classifier in SQL achieves
high classification accuracy, can efficiently analyze
large data sets, and has linear scalability.
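A small Python/sqlite3 sketch shows the flavor of programming a naive Bayes classifier with SQL aggregate queries: the priors come from one GROUP BY, the conditionals from COUNT queries. The toy schema and Laplace smoothing are illustrative assumptions, not the paper's optimized table layouts; interpolating a column name into the SQL is acceptable only because the schema is fixed in this sketch.

```python
import math
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE train (f1 TEXT, f2 TEXT, cls TEXT)")
con.executemany("INSERT INTO train VALUES (?, ?, ?)", [
    ("sunny", "hot", "no"), ("sunny", "mild", "no"), ("rain", "mild", "yes"),
    ("rain", "cool", "yes"), ("overcast", "hot", "yes"), ("sunny", "cool", "yes"),
])

# model computation: class priors straight from a GROUP BY aggregate
priors = dict(con.execute(
    "SELECT cls, COUNT(*) * 1.0 / (SELECT COUNT(*) FROM train) "
    "FROM train GROUP BY cls"))

def cond_prob(col, value, cls, alpha=1.0):
    """Laplace-smoothed P(col = value | cls) from aggregate queries."""
    hits, = con.execute(
        "SELECT COUNT(*) FROM train WHERE %s = ? AND cls = ?" % col,
        (value, cls)).fetchone()
    total, = con.execute(
        "SELECT COUNT(*) FROM train WHERE cls = ?", (cls,)).fetchone()
    distinct, = con.execute(
        "SELECT COUNT(DISTINCT %s) FROM train" % col).fetchone()
    return (hits + alpha) / (total + alpha * distinct)

def predict(f1, f2):
    """Scoring: log-space naive Bayes over the two features."""
    scores = {c: math.log(priors[c]) + math.log(cond_prob("f1", f1, c))
                 + math.log(cond_prob("f2", f2, c)) for c in priors}
    return max(scores, key=scores.get)
```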
36. Using a web-based tool to Top-down process improvement approaches provide a java
define and implement high-level model of what the process of a software
software process development organisation should be. Such models are
improvement initiatives in based on the consensus of a designated working group
a small industrial setting on how software should be developed or maintained.
They are very useful in that they provide general
guidelines on where to start improving, and in which
order, to people who do not know how to do it.
However, the majority of models have only worked in
scenarios within large companies. The authors aim to
help small software development organisations adopt
an iterative approach by providing a process
improvement web-based tool. This study presents
research into a proposal which states that a small
organisation may use this tool to assess and improve
their software process, identifying and implementing a
set of agile project management practices that can be
strengthened using the CMMI-DEV 1.2 model as
reference.
37. An Online Monitoring Web service technology aims to enable the Java
Approach for Web Service interoperation of heterogeneous systems and the
Requirements reuse of distributed functions in an unprecedented
scale and has achieved significant success. There are
still, however, challenges to realize its full potential.
One of these challenges is to ensure the behaviour of
Web services is consistent with their requirements.
Monitoring events that are relevant to
Web service requirements is, thus, an important
technique. This paper introduces an online monitoring
approach for Web service requirements. It includes a
pattern-based specification of service constraints that
correspond to service requirements, and a monitoring
model that covers five kinds of system events relevant
to client request, service response, application,
resource, and management, and a monitoring
framework in which different probes and agents
collect events and data that are sensitive to
requirements. The framework analyzes the collected
information against the prespecified constraints, so as
to evaluate the behaviour and use of Web
services. The prototype implementation and
experiments with a case study show that our
approach is effective and flexible, and the monitoring
cost is affordable.
S.NO TITLE -2011 ABSTRACT DOMAIN PLATFORM
1. Exploiting Dynamic In recent years ad hoc parallel data processing has Parallel
Resource Allocation emerged as one of the killer applications for Distribution
for Efficient Parallel Infrastructure-as-a-Service (IaaS) clouds. Major Cloud
Data Processing in computing companies have started to integrate
the Cloud frameworks for parallel data processing in their product
portfolio, making it easy for customers to access these
services and to deploy their programs. However, the
processing frameworks which are currently used have
been designed for static, homogeneous cluster setups
and disregard the particular nature of a cloud.
Consequently, the allocated compute resources may be
inadequate for big parts of the submitted job and
unnecessarily increase processing time and cost. In this
paper, we discuss the opportunities and challenges for
efficient parallel data processing in clouds and present
our research project Nephele. Nephele is the first data
processing framework to explicitly exploit the dynamic
resource allocation offered by today's IaaS clouds for
both task scheduling and execution. Particular tasks of a
processing job can be assigned to different types of
virtual machines which are automatically instantiated
and terminated during the job execution. Based on this
new framework, we perform extended evaluations of
MapReduce-inspired processing jobs on an IaaS cloud
system and compare the results to the popular data
processing framework Hadoop.
2. Data integrity proofs Cloud computing has been envisioned as the de facto Communication
in cloud storage solution to the rising storage costs of IT Enterprises. System &
With the high costs of data storage devices as well as the Network
rapid rate at which data is being generated it proves
costly for enterprises or individual users to frequently
update their hardware. Apart from the reduction in storage
costs, data outsourcing to the cloud also helps in reducing
maintenance. Cloud storage moves the user's data to
large data centers, which are remotely located, over which the
user does not have any control. However, this unique
feature of the cloud poses many new security challenges
which need to be clearly understood and resolved. One of
the important concerns that needs to be addressed is to
assure the customer of the integrity, i.e., correctness, of his
data in the cloud. As the data is not physically accessible
to the user, the cloud should provide a way for the user to
check whether the integrity of his data is maintained or
compromised. In this paper we provide a scheme which
gives a proof of data integrity in the cloud which the
customer can employ to check the correctness of his data
in the cloud. This proof can be agreed upon by both the
cloud and the customer and can be incorporated in the
Service level agreement (SLA). This scheme ensures that
the storage at the client side is minimal, which will be
beneficial for thin clients.
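One simple way such a proof of data integrity can work is a precomputed challenge-response protocol. The sketch below, with names of our choosing, only illustrates the idea, not the scheme proposed in the paper: the client keeps a few nonce-keyed block digests before outsourcing, and the cloud cannot answer a challenge correctly unless it still holds the challenged block intact.

```python
import hashlib
import os
import random

BLOCK = 64

def to_blocks(data: bytes):
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

class Client:
    """Before outsourcing, precomputes H(nonce || block) for a few randomly
    chosen blocks; each stored tag backs exactly one later challenge, and the
    cloud cannot precompute answers because the nonces stay with the client."""
    def __init__(self, data: bytes, n_challenges: int = 4, seed: int = 7):
        rng = random.Random(seed)
        bs = to_blocks(data)
        self.pending = []
        for _ in range(n_challenges):
            i = rng.randrange(len(bs))
            nonce = os.urandom(16)
            self.pending.append((i, nonce, hashlib.sha256(nonce + bs[i]).digest()))
    def next_challenge(self):
        return self.pending.pop()           # (block index, nonce, expected tag)

class Cloud:
    """Stores the outsourced data and answers integrity challenges."""
    def __init__(self, data: bytes):
        self.blocks = to_blocks(data)
    def respond(self, index: int, nonce: bytes) -> bytes:
        return hashlib.sha256(nonce + self.blocks[index]).digest()
```

Client-side storage is only a handful of tags, which is what makes this style of scheme friendly to thin clients.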
3. Efficient Computing In many applications, including location based services, Knowledge
of Range Aggregates queries are not precise. In this paper, we study the & data
against Uncertain problem of efficiently computing range aggregates in a engineering
Location Based multi-dimensional space when the query location is
uncertain. That is, for a set of data points P, an uncertain
Collections location based query Q with location described by a
probabilistic density function, we want to calculate the
aggregate information (e.g., count, average, and sum) of
the data points within distance gamma to Q with
probability at least theta. We propose novel, efficient
techniques to solve the problem based on a filtering-and-
verification framework. In particular, two novel filtering
techniques are proposed to effectively and efficiently
remove data points from verification. Finally, we show
that our techniques can be immediately extended to
solve the range query problem. Comprehensive
experiments conducted on both real and synthetic data
demonstrate the efficiency and scalability of our
techniques.
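The filtering-and-verification idea can be sketched with a Monte Carlo verifier, assuming for illustration a Gaussian query-location PDF; the cheap distance bounds below stand in for the paper's more sophisticated filtering techniques.

```python
import math
import random

def prob_within(point, q_center, q_sigma, gamma, samples=5000, seed=1):
    """Monte Carlo estimate of P(dist(point, Q) <= gamma) when the uncertain
    query location Q is Gaussian around q_center (our stand-in for the
    query's probability density function)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        qx = rng.gauss(q_center[0], q_sigma)
        qy = rng.gauss(q_center[1], q_sigma)
        if math.hypot(point[0] - qx, point[1] - qy) <= gamma:
            hits += 1
    return hits / samples

def range_count(points, q_center, q_sigma, gamma, theta):
    """Filtering-and-verification: distance bounds prune points that are
    (all but) certainly in or out; only the rest pay for verification."""
    margin = 6 * q_sigma                    # Q beyond 6 sigma is negligible
    count = 0
    for p in points:
        d = math.hypot(p[0] - q_center[0], p[1] - q_center[1])
        if d > gamma + margin:              # filter: certainly outside
            continue
        if d + margin <= gamma:             # filter: effectively certainly inside
            count += 1
        elif prob_within(p, q_center, q_sigma, gamma) >= theta:
            count += 1                      # verification step
    return count
```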
4. Exploring Natural phenomena show that many creatures form Knowledge
Application-Level large social groups and move in regular patterns. & Data
Semantics for Data However, previous works focus on finding the movement Engineering
Compression patterns of each single object or all objects. In this paper,
we first propose an efficient distributed mining
algorithm to jointly identify a group of moving objects
and discover their movement patterns in wireless sensor
networks. Afterward, we propose a compression
algorithm, called 2P2D, which exploits the obtained
group movement patterns to reduce the amount of
delivered data. The compression algorithm includes
sequence merge and entropy reduction phases. In the
sequence merge phase, we propose a Merge algorithm to
merge and compress the location data of a group of
moving objects. In the entropy reduction phase, we
formulate a Hit Item Replacement (HIR) problem and
propose a Replace algorithm that obtains the optimal
solution. Moreover, we devise three replacement rules
and derive the maximum compression ratio. The
experimental results show that the proposed
compression algorithm leverages the group movement
patterns to reduce the amount of delivered data
effectively and efficiently.
5. Improving Aggregate Recommender systems are becoming increasingly Knowledge
Recommendation important to individual users and businesses for & Data
Diversity Using providing personalized recommendations. However, Engineering
Ranking-Based while the majority of algorithms proposed in
Techniques recommender systems literature have focused on
improving recommendation accuracy (as exemplified by
the recent Netflix Prize competition), other important
aspects of recommendation quality, such as the diversity
of recommendations, have often been overlooked. In this
paper, we introduce and explore a number of item
ranking techniques that can generate recommendations
that have substantially higher aggregate diversity across
all users while maintaining comparable levels of
recommendation accuracy. Comprehensive empirical
evaluation consistently shows the diversity gains of the
proposed techniques using several real-world rating
datasets and different rating prediction algorithms.
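One of the simplest ranking-based techniques in this space can be sketched as follows: among items whose predicted rating clears an accuracy threshold, recommend the least-popular ones first. The function signature is our own illustration, not a technique the abstract names.

```python
def diversity_rerank(predictions, popularity, threshold, top_n):
    """Among items whose predicted rating clears `threshold`, recommend the
    least-popular first (ties broken by higher predicted rating). Trading a
    little head-of-catalog accuracy for long-tail items raises aggregate
    diversity across all users while keeping recommendations relevant."""
    qualified = [(item, r) for item, r in predictions.items() if r >= threshold]
    qualified.sort(key=lambda ir: (popularity.get(ir[0], 0), -ir[1]))
    return [item for item, _ in qualified[:top_n]]
```

A pure accuracy ranking of the same inputs would surface the blockbuster item first; the threshold controls how much predicted-rating head-room is traded for diversity.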
6. Monitoring Service Business processes are increasingly distributed and Service
Systems from a open, making them prone to failure. Monitoring is, Computing
Language-Action therefore, an important concern not only for the
processes themselves but also for the services that
Perspective comprise these processes. We present a framework for
multilevel monitoring of these service systems. It
formalizes interaction protocols, policies, and
commitments that account for standard and extended
effects following the language-action perspective, and
allows specification of goals and monitors at varied
abstraction levels. We demonstrate how the framework
can be implemented and evaluate it with multiple
scenarios that include specifying and monitoring open-
service policy commitments.
7. One Size Does Not Fit With the emergence of deep Web databases, Knowledge
All Towards User- searching in domains such as vehicles, real estate, etc., & data
and Query- has become a routine task. One of the problems in this engineering
Dependent Ranking context is ranking the results of a user query. Earlier
For Web Databases approaches for addressing this problem have used
frequencies of database values, query logs, and user
profiles. A common thread in most of these approaches is
that ranking is done in a user- and/or query-
independent manner. This paper proposes a novel
query- and user-dependent approach for ranking the
results of Web database queries. We present a ranking
model, based on two complementary notions of user and
query similarity, to derive a ranking function for a given
user query. This function is acquired from a sparse
workload comprising several such ranking functions
derived for various user-query pairs. The proposed
model is based on the intuition that similar users display
comparable ranking preferences over the results of
similar queries. We define these similarities formally in
alternative ways and discuss their effectiveness both
analytically and experimentally over two distinct Web
databases.
8. Optimal Service Cloud applications that offer data management services Knowledge
Pricing for a Cloud are emerging. Such clouds support caching of data in & data
Cache order to provide quality query services. The users can engineering
query the cloud data, paying the price for the
infrastructure they use. Cloud management necessitates
an economy that manages the service of multiple users in
an efficient but also resource-economic way that allows
for cloud profit. Naturally, the maximization of cloud
profit given some guarantees for user satisfaction
presumes an appropriate price-demand model that
enables optimal pricing of query services. The model
should be plausible in that it reflects the correlation of
cache structures involved in the queries. Optimal pricing
is achieved based on a dynamic pricing scheme that
adapts to time changes. This paper proposes a novel
price-demand model designed for a cloud cache and a
dynamic pricing scheme for queries executed in the
cloud cache. The pricing solution employs a novel
method that estimates the correlations of the cache
services in a time-efficient manner. The experimental
study shows the efficiency of the solution.
9. A Personalized As a model for knowledge description and formalization, Knowledge
Ontology Model for ontologies are widely used to represent user profiles in & data
Web Information personalized web information gathering. However, when engineering
Gathering representing user profiles, many models have utilized
only knowledge from either a global knowledge base or a
user local information. In this paper, a personalized
ontology model is proposed for knowledge
representation and reasoning over user profiles. This
model learns ontological user profiles from both a world
knowledge base and user local instance repositories. The
ontology model is evaluated by comparing it against
benchmark models in web information gathering. The
results show that this ontology model is successful.
10. A Branch-and-Bound In branch-and-bound (B&B) schemes for solving a Computers
Algorithm for Solving minimization problem, a better lower bound could prune
the Multiprocessor many meaningless branches which do not lead to an
Scheduling Problem optimum solution. In this paper, we propose several
with Improved techniques to refine the lower bound on the makespan in
Lower Bounding the multiprocessor scheduling problem (MSP). The key
Techniques idea of our proposed method is to combine an efficient
quadratic-time algorithm for calculating the Fernández's
bound, which is known as the best lower bounding
technique proposed in the literature with two
improvements based on the notions of binary search and
recursion. The proposed method was implemented as a
part of a B&B algorithm for solving MSP, and was
evaluated experimentally. The result of experiments
indicates that the proposed method certainly improves
the performance of the underlying B&B scheme. In
particular, we found that it improves solutions generated
by conventional heuristic schemes for more than 20
percent of randomly generated instances, and for more
than 80 percent of instances, it could provide a
certification of optimality of the resulting solutions, even
when the execution time of the B&B scheme is limited by
one minute.
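A toy version of lower-bound pruning in a B&B scheduler, using only the classical average-load and largest-task bounds rather than the Fernández bound the paper refines, might look like this:

```python
def schedule_makespan(tasks, m):
    """Branch-and-bound for independent tasks on m identical processors.
    bound() combines the classical average-load and largest-task lower
    bounds; any branch whose bound cannot beat the incumbent is pruned."""
    tasks = sorted(tasks, reverse=True)       # big tasks first: tighter bounds
    loads = [0] * m
    best = [sum(tasks)]                       # incumbent: everything on one CPU
    def bound(i):
        avg = (sum(loads) + sum(tasks[i:])) / m
        return max(max(loads), avg, tasks[i])
    def branch(i):
        if i == len(tasks):
            best[0] = min(best[0], max(loads))
            return
        if bound(i) >= best[0]:
            return                            # prune: cannot improve incumbent
        tried = set()
        for p in range(m):
            if loads[p] in tried:             # equal loads give identical subtrees
                continue
            tried.add(loads[p])
            loads[p] += tasks[i]
            branch(i + 1)
            loads[p] -= tasks[i]
    branch(0)
    return best[0]
```

A tighter bound() prunes more of the tree without changing the returned optimum, which is exactly the lever the paper works on.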
11. Design and Peer-to-peer (P2P) systems generate a major fraction of Computers
Evaluation of a Proxy the current Internet traffic, and they significantly
Cache for Peer-to- increase the load on ISP networks and the cost of
Peer Traffic running and connecting customer networks (e.g.,
universities and companies) to the Internet. To mitigate
these negative impacts, many previous works in the
literature have proposed caching of P2P traffic, but very
few (if any) have considered designing a caching system
to actually do it. This paper demonstrates that caching
P2P traffic is more complex than caching other Internet
traffic, and it needs several new algorithms and storage
systems. Then, the paper presents the design and
evaluation of a complete, running, proxy cache for P2P
traffic, called pCache. pCache transparently intercepts
and serves traffic from different P2P systems. A new
storage system is proposed and implemented in pCache.
This storage system is optimized for storing P2P traffic,
and it is shown to outperform other storage systems. In
addition, a new algorithm to infer the information
required to store and serve P2P traffic by the cache is
proposed. Furthermore, extensive experiments to
evaluate all aspects of pCache using actual
implementation and real P2P traffic are presented.
12. Robust Feature Feature selection often aims to select a compact feature Computation
Selection for subset to build a pattern classifier with reduced al Biology
Microarray Data complexity, so as to achieve improved classification and
Based on performance. From the perspective of pattern analysis, Bioinformati
producing stable or robust solution is also a desired cs
Multicriterion Fusion property of a feature selection algorithm. However, the
issue of robustness is often overlooked in feature
selection. In this study, we analyze the robustness issue
existing in feature selection for high-dimensional and
small-sized gene-expression data, and propose to
improve the robustness of feature selection algorithms by
using multiple feature selection evaluation criteria.
Based on this idea, a multicriterion fusion-based
recursive feature elimination (MCF-RFE) algorithm is
developed with the goal of improving both classification
performance and stability of feature selection results.
Experimental studies on five gene-expression data sets
show that the MCF-RFE algorithm outperforms the
commonly used benchmark feature selection algorithm
SVM-RFE.
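A hedged sketch of the multicriterion-fusion idea: rank the surviving features under two simple criteria (our stand-ins for the paper's evaluation criteria), fuse the ranks, and recursively eliminate the worst feature until the desired subset size remains.

```python
import statistics

def t_score(c0, c1):
    """Class-separation criterion: two-sample t-like statistic."""
    m0, m1 = statistics.mean(c0), statistics.mean(c1)
    s0, s1 = statistics.pstdev(c0), statistics.pstdev(c1)
    return abs(m0 - m1) / (s0 + s1 + 1e-9)

def corr_score(values, labels):
    """Relevance criterion: absolute Pearson correlation with the labels."""
    n = len(values)
    mx, my = sum(values) / n, sum(labels) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(values, labels))
    vx = sum((x - mx) ** 2 for x in values)
    vy = sum((y - my) ** 2 for y in labels)
    return abs(cov) / ((vx * vy) ** 0.5 + 1e-9)

def mcf_rfe(X, y, keep):
    """Recursive feature elimination with multicriterion fusion: each round
    ranks features under both criteria, averages the ranks, and eliminates
    the worst feature, stabilizing the selection against any one criterion."""
    features = list(range(len(X[0])))
    while len(features) > keep:
        scores = []
        for f in features:
            col = [row[f] for row in X]
            c0 = [v for v, lbl in zip(col, y) if lbl == 0]
            c1 = [v for v, lbl in zip(col, y) if lbl == 1]
            scores.append((t_score(c0, c1), corr_score(col, y)))
        fused = [0] * len(features)
        for crit in range(2):               # rank fusion across criteria
            order = sorted(range(len(features)), key=lambda i: scores[i][crit])
            for rank, i in enumerate(order):
                fused[i] += rank            # higher fused rank = better feature
        features.pop(min(range(len(features)), key=lambda i: fused[i]))
    return features
```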
13. Image-Based Surface Emerging technologies for structure matching based on Computation
Matching Algorithm surface descriptions have demonstrated their al Biology
Oriented to effectiveness in many research fields. In particular, they and
Structural Biology can be successfully applied to in silico studies of Bioinformati
structural biology. Protein activities, in fact, are related cs
to the external characteristics of these macromolecules
and the ability to match surfaces can be important to
infer information about their possible functions and
interactions. In this work, we present a surface-matching
algorithm, based on encoding the outer morphology of
proteins in images of local description, which allows us
to establish point-to-point correlations among
macromolecular surfaces using image-processing
functions. Discarding methods relying on biological
analysis of atomic structures and expensive
computational approaches based on energetic studies,
this algorithm can successfully be used for
macromolecular recognition by employing local surface
features. Results demonstrate that the proposed
algorithm can be employed both to identify surface
similarities in context of macromolecular functional
analysis and to screen possible protein interactions to
predict pairing capability.
14. Iris matching using Iris recognition is one of the most widely used biometric Computer
multi-dimensional techniques for personal identification. This identification Vision, IET
artificial neural is achieved in this work by using the concept that the iris
network patterns are statistically unique and suitable for
biometric measurements. In this study, a novel method
of recognition of these patterns of an iris is considered
by using a multidimensional artificial neural network.
The proposed technique has the distinct advantage of
using the entire resized iris as an input at once. It is
capable of excellent pattern recognition, as the
iris texture is unique to every person. The system is
trained and tested using two
publicly available databases (CASIA and UBIRIS). The
proposed approach shows significant promise and
potential for improvements, compared with the other
conventional matching techniques with regard to time
and efficiency of results.
15. Real-time tracking Many vision problems require fast and accurate tracking Computer
using A* heuristic of objects in dynamic scenes. In this study, we propose Vision, IET
search and template an A* search algorithm through the space of
transformations for computing fast target 2D motion.