International Journal of Engineering Research and Development
e-ISSN: 2278-067X, p-ISSN: 2278-800X, www.ijerd.com
Volume 7, Issue 4 (May 2013), PP.03-08
Search Log Publishing With Improved Utility Using
Confess Algorithm
S. Belinsha¹, A. P. V. Raghavendra²
¹Department of Computer Science and Engineering, VSB Engineering College, Karur
²Department of Computer Science and Engineering, VSB Engineering College, Karur
Abstract:- Search engines are widely used by web users, and search engine companies strive to produce the best search results. Search logs are records of the interactions between users and the search engine. Various search patterns and user behaviours can be analyzed from these logs, which helps to enhance the search results. Publishing these search logs to a third party for analysis, however, raises a privacy issue. The Zealous algorithm, which filters the frequent search items in the log, loses utility in the course of providing privacy. The proposed Confess algorithm extends that work by qualifying the infrequent search items in the log, which increases the utility of the published search log while preserving privacy. The Confess algorithm qualifies the infrequent keywords and URL clicks in the search log and publishes them along with the frequent items.
Keywords:- utility, information service, privacy, search logs, search behaviour, infrequent items,
search items, threshold
I. INTRODUCTION
The web stores a large amount of information, which is retrieved by means of various techniques collectively termed web mining. Privacy preservation is a hot topic on the web. Web mining comprises web structure mining, web content mining and web usage mining; analyzing and studying the search log falls under the category of web usage mining. Search logs are confined to the search engines. Search engines are applications that help users browse the web efficiently, and nowadays web users depend heavily on them to access the web. Search engine companies strive to produce the best search results for their users. Search logs are records of the interactions between the users and the search engine, holding data such as the user id, search keywords, URL (Uniform Resource Locator) clicks, and the date and time of the search. Publishing a search log can be done in two ways: providing the log to a third party, or deploying the log for the search engine's own functions. The information in the log supports the analysis of users' search behaviours and patterns, which helps to enhance the search results; such analysis is performed by the research community. When these logs are provided to a third party, a privacy guarantee should be given to the users represented in them. When privacy is the focus, the utility, i.e. the number of items released in the log, decreases, as more records must be eliminated.
In the AOL (America Online) search log release, the log was released with the user ids replaced by random numbers [1]. The privacy factor was compromised in exchange for more utility; hence there is always a trade-off between privacy and utility. Holding back the user's identity alone does not guarantee privacy: the identity can also be revealed by the formation of the queries and the links followed by the user. The keywords may also contain sensitive information such as a social security number, a credit card number or certain demographic information. Hence the focus has to be placed on these items, and strong strategies have to be followed when releasing the keywords formed by the users in the search log.
Earlier work released logs with the user identity replaced by random numbers [2], but this approach was not promising because of its weak privacy guarantee and its exposure to background-knowledge linkage attacks [1]; a user's identity can also be revealed through the keywords the user formed. The AOL search log release stands as an example of this case: the search logs of several users were released with random ids, yet the public was able to identify certain users by the formation of their queries. Later the work was extended to anonymization [2], where similar items were grouped and released; achieving k-anonymity and l-diversity are some of the privacy-preserving techniques used. The dilemma in those techniques was that they remained prone to background-knowledge attacks [1], and the crucial side effect was that they lost the uniqueness of each user's search. The same was the case with generalization techniques.
The Zealous algorithm [1] was proposed to release the frequent items in the log through a two-threshold framework. Frequent queries are more privacy-promising: a keyword becomes frequent when it reflects a common public interest, so publishing it gives less chance of identifying any individual user. Publishing
the frequent items alone, however, does not contribute enough to the utility of the log; certain infrequent items must also be considered. In practice, a search log may contain fewer frequent items than infrequent ones. An infrequent item has a higher probability of identifying a user, but there exist infrequent queries that are of public interest and relevant to a frequent query. Hence the Confess algorithm tries to find such keywords and their corresponding URL clicks and publishes them in the search log. To qualify the infrequent keywords and URL clicks, separate qualifying strategies need to be formulated; hence different qualifying constraints are set for the keywords and for the URL clicks.
The resulting Confess log is applied to serve search engine functions such as query suggestion and query substitution, and the performance is studied and evaluated from the results. The Confess log publishing strategy is also applied to the search engine, and its effectiveness is studied in comparison with the Zealous algorithm. The Zealous and Confess logs are compared in terms of the average number of items published in the log.
II. ZEALOUS ALGORITHM
The Zealous algorithm uses a two-phase framework to discover the frequent items in the log and finally publishes them. To discover the frequent items, it uses two threshold values. The first threshold is set based on the number of user contributions in the log. Laplacian noise is added to the counts of the items that pass the first threshold, and the noisy counts are then compared against the second threshold. The noise is added to divert attackers and to produce non-exact statistics [4]. By this method of finding the frequent items, the resulting log achieves probabilistic differential privacy. The main objective of the Zealous algorithm is to figure out the frequent items in the log. The algorithm was applied to a sample search log collected from a local search engine, covering items such as keywords and URL values; the log contained more than 200 entries from 58 users. The Zealous algorithm was applied to the log with the threshold values in the table.
Table I: Keyword log of Zealous
The keywords above are those that passed the filtration of the two-phase framework and are identified as frequent keywords. The frequent URL clicks in the log are identified similarly by the two threshold values.
Table II : URL log of Zealous
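As a rough illustration, the two-phase filtering described above can be sketched as follows. The function name, the concrete threshold values and the noise scale are assumptions for the example, not the authors' implementation:

```python
import random
from collections import Counter

def zealous_filter(items, tau1, tau2, scale=1.0, rng=None):
    """Two-phase frequent-item filter in the spirit of Zealous.

    Phase 1: keep items whose raw count exceeds tau1.
    Phase 2: perturb each surviving count with Laplace noise and keep
    items whose noisy count exceeds tau2.
    """
    rng = rng or random.Random()
    counts = Counter(items)
    survivors = {k: c for k, c in counts.items() if c > tau1}
    published = {}
    for k, c in survivors.items():
        # Laplace noise drawn as the difference of two exponentials
        noise = rng.expovariate(1 / scale) - rng.expovariate(1 / scale)
        if c + noise > tau2:
            published[k] = c
    return published

log = ["news"] * 50 + ["rare"]
out = zealous_filter(log, tau1=5, tau2=10, rng=random.Random(0))
```

Here "news" comfortably clears both thresholds, while "rare" is discarded in the first phase; the noise only matters for items whose counts sit near the second threshold.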
However, the Zealous algorithm leaves out the infrequent keywords in the log, and setting the threshold values is itself a challenging task. A search log will contain several infrequent items; an infrequent item that has no possibility of revealing a user's identity has to be identified and published. Hence Confess is proposed to qualify such infrequent items in the log.
III. CONFESS ALGORITHM
The Confess algorithm follows the Zealous algorithm to trace out the frequent items: the Zealous two-phase threshold framework identifies the frequent items, the frequent and infrequent items are isolated, and further processing is done to qualify the infrequent ones. The infrequent items are retrieved from the log, and the following constraints are checked against them. The two kinds of items considered for qualification are the keywords and the URL clicks, as they carry the most user information.
A. Qualifying the keyword
The keywords are the user's prime input, through which the user expresses his needs on the web. The keywords formed by a user reveal much private information about that user and would be a gold mine for anyone trying to learn the user's identity. So several strategies are formulated to qualify the keywords that are privacy-promising [5].
1) Profile information: Users register before performing a search and must provide certain mandatory information at registration. The infrequent queries are first checked against this profile information to see whether they contain any sensitive data; if so, they are not used for further processing. Consider the keyword 07480433 of a user. This keyword contains a social security number, which is likely to reveal the identity of the user; it is detected by comparing the item with the profile information registered by the user. If a keyword contains any profile information given by the user, the keyword is not qualified. In this way, keywords containing information such as a name, date of birth, phone number, social security number or address can be identified and prohibited from publishing.
2) Sub-keyword checking: The keywords formed by different users differ and carry each user's uniqueness. Each infrequent keyword is compared with the frequent keywords to find whether it is a sub-keyword of one of them; if so, the keyword is qualified. Consider the frequent keyword "lecture notes about search logs", as discovered by the Zealous algorithm. The keyword "about search logs" is infrequent, but it is a sub-keyword of the frequent item, so it is qualified to be published. This adds useful entries to the published log.
B. Qualifying the URL clicks
URL are the data which helps to know the location of a resource in the web. The URL clicks are the
important item in the log, which points out the user’s visiting of the web pages. The keywords and URL clicks
together can lead to identifying an user. Hence certain constraints are set to qualify the URL clicks.
1) URL shortening: A URL (Uniform Resource Locator) reveals the location of a resource in the web environment and normally contains fields such as the protocol, authority, filename, host, path and port. The complete URL of a user click is likely to reveal the user's identity, so attributes such as the filename and path are removed; this conceals the exact page the user visited. Consider the URL click https://developer.cebv.in/search-appliance/document/50/help_mini/status_log, which is shortened to "https://developer.cebv.in". The shortened URL provides less information about the page visited; revealing the complete URL value could sometimes identify a user, so the shortening is done to preserve privacy.
2) Multiple visits to the same URL: A user obtains several search results for the keyword provided. The user chooses the links appropriate to his search intention, and several of the chosen links may point to the same site, revealing that the user finds the information on that site satisfies his need.
Consider the keyword "exam results" in the log. The URLs clicked by the user from the search results are:
http://www.results.in/colleges/BEresults.html
http://www.results.in/colleges/MCAres.html
http://www.results.in/colleges/MEresults.html
http://www.results.in/colleges/MBAres.html
These clicks reveal that the user finds the intended content on http://www.results.in. That URL is then qualified and included in the published log. When multiple links pointing to one site appear among a user's clicks, it shows that the site is a prevalent page offering beneficial information for the input keyword, and hence it can also be privacy-promising.
3) The URL with the keyword: The user searches by keyword and obtains results. The URL chosen by the user may contain the keyword, or a term of it, as a sub-term, which denotes a relevant click; such URLs can be included in the published log. Consider the keyword "exam results" in the search log: if the URL clicked by the user is http://www.examinfo.in, that URL is added to the published log. A clicked URL containing the keyword shows that the web page is of common interest. This depends highly on the user's way of forming the keyword and following the links in the results.
4) URL of top-ranked pages: A user may select a link from the search results for a keyword with various intentions. When the clicked page is one of the top-ranked pages, its URL can be published; a page frequently visited by a user is also considered for publishing. The top-ranked pages are safe enough to be published in the log [1].
By the above constraints, the infrequent URL clicks and keywords of the users are qualified and published in the log, which improves the utility of the published log. The Confess algorithm is applied to the keywords and URL clicks of the several users in the search log.
IV. RESULTS
The following tables depict the results produced by the Confess algorithm on the same search log that was used by the Zealous algorithm.
Table III : Keyword log of Confess
The above keyword log is the result of applying the Confess algorithm to find qualifying infrequent items. It can be noted that the qualified keywords are parts of frequent keywords; releasing such keywords improves utility, as the published log contains more entries.
Table IV: URL log of Confess
The above log contains the qualified infrequent URL clicks along with the frequent URLs. After the items in the search log, i.e. the keywords and URL clicks, are qualified, they are matched against the entries in the original log, and the entries with a qualified keyword or URL click are published together with the date and time of the users.
Table V : Portion of the search log after qualification of the items
The above log is the portion of the search log after qualification. The log contains the User-id (U), Keyword (K), URL-click (U) and Timestamp (T). The log retains the user id to carry the uniqueness of each user; if the user id were eliminated, various session information would be lost because the user's uniqueness would no longer be obvious.
V. COMPARATIVE STUDY
The performance of the Confess algorithm is analyzed through parameters such as response time and the average number of items published in the log. The proposed Confess algorithm is then compared with the Zealous algorithm to study its performance in terms of the utility of the produced log.
The statistics below show the average number of keywords published in the Zealous log and the Confess log. The average number of keywords (Nk) is the ratio of the number of items released in the published log to the total number of items in the original log. Various experimental search logs were considered for this study.
Table VI: Comparison of the average number of keywords
From the above statistics, the graph below is generated.
Figure 1.1: Comparison of the average number of keywords
It can be inferred that the Confess keyword log outputs more keywords than the Zealous log, and in some instances the average number of keywords produced is almost equal. This is highly probabilistic, because it depends on the users' intentions when forming keywords.
The statistics below show the average number of URL clicks published in the Zealous log and the Confess log. The average number of URL clicks (Nu) is the ratio of the number of items in the published log to the number of items in the original, unprocessed log; this metric is considered for the study.
Table VII: Comparison of the average number of URL clicks
From these statistics, the graph below is generated.
Figure 1.2: Comparison of the average number of URL clicks
It can be inferred that the Confess log also outputs more URL clicks than the Zealous log while maintaining the privacy of the users. It can also be noticed that the URL log produces more utility than the keyword log, as more URL clicks are qualified.
From the above studies, it can be inferred that qualifying infrequent items in the log enhances the utility of the published log. The resulting log can be deployed to support various search engine functions, reducing the time complexity of using the log compared with the original, unprocessed search log.
VI. APPLICATIONS
As the Confess log provides more utility, it can be applied to various search engine functions such as index caching, query substitution and query suggestion. These activities must be processed quickly to give users a better search experience, and the time consumed reduces when the Confess log is used rather than the original log. The utility of the log is increased over that of the Zealous log, and privacy is still achieved.
VII. CONCLUSION
From the above studies, it can be inferred that the average number of items released is higher, and hence the utility of the search log is improved by including the qualified infrequent items. Publishing those infrequent items does not disturb the privacy of the users, as each item must satisfy various privacy-promising constraints.
VIII. FUTURE ENHANCEMENT
Better qualifying criteria can be set for the infrequent keywords and URL clicks, and the work can be extended to set constraints for users who are not registered with the search engine. Challenges still lie in discovering the frequent items in search logs; more efficient methods to discover them can also be formulated.
REFERENCES
[1]. M. Götz, A. Machanavajjhala, G. Wang, X. Xiao and J. Gehrke, "Publishing Search Logs – A Comparative Study of Privacy Guarantees", IEEE Transactions on Knowledge and Data Engineering, Vol. 24, No. 3, March 2012.
[2]. E. Adar, "User 4xxxxx9: Anonymizing Query Logs", Proc. World Wide Web (WWW) Workshop on Query Log Analysis, 2007.
[3]. A. Korolova, K. Kenthapadi, N. Mishra and A. Ntoulas, "Releasing Search Queries and Clicks Privately", Proc. 18th Int'l Conf. World Wide Web (WWW), 2009.
[4]. C. Dwork, K. Kenthapadi, F. McSherry, I. Mironov and M. Naor, "Our Data, Ourselves: Privacy via Distributed Noise Generation", Proc. Ann. Int'l Conf. Theory and Applications of Cryptographic Techniques (EUROCRYPT), 2006.
[5]. V. S. Iyengar, "Transforming Data to Satisfy Privacy Constraints", Proc. ACM SIGKDD Int'l Conf. Knowledge Discovery and Data Mining, 2002.

Mitigation of Voltage Sag/Swell with Fuzzy Control Reduced Rating DVRMitigation of Voltage Sag/Swell with Fuzzy Control Reduced Rating DVR
Mitigation of Voltage Sag/Swell with Fuzzy Control Reduced Rating DVR
 
Study on the Fused Deposition Modelling In Additive Manufacturing
Study on the Fused Deposition Modelling In Additive ManufacturingStudy on the Fused Deposition Modelling In Additive Manufacturing
Study on the Fused Deposition Modelling In Additive Manufacturing
 
Spyware triggering system by particular string value
Spyware triggering system by particular string valueSpyware triggering system by particular string value
Spyware triggering system by particular string value
 
A Blind Steganalysis on JPEG Gray Level Image Based on Statistical Features a...
A Blind Steganalysis on JPEG Gray Level Image Based on Statistical Features a...A Blind Steganalysis on JPEG Gray Level Image Based on Statistical Features a...
A Blind Steganalysis on JPEG Gray Level Image Based on Statistical Features a...
 
Secure Image Transmission for Cloud Storage System Using Hybrid Scheme
Secure Image Transmission for Cloud Storage System Using Hybrid SchemeSecure Image Transmission for Cloud Storage System Using Hybrid Scheme
Secure Image Transmission for Cloud Storage System Using Hybrid Scheme
 
Application of Buckley-Leverett Equation in Modeling the Radius of Invasion i...
Application of Buckley-Leverett Equation in Modeling the Radius of Invasion i...Application of Buckley-Leverett Equation in Modeling the Radius of Invasion i...
Application of Buckley-Leverett Equation in Modeling the Radius of Invasion i...
 
Gesture Gaming on the World Wide Web Using an Ordinary Web Camera
Gesture Gaming on the World Wide Web Using an Ordinary Web CameraGesture Gaming on the World Wide Web Using an Ordinary Web Camera
Gesture Gaming on the World Wide Web Using an Ordinary Web Camera
 
Hardware Analysis of Resonant Frequency Converter Using Isolated Circuits And...
Hardware Analysis of Resonant Frequency Converter Using Isolated Circuits And...Hardware Analysis of Resonant Frequency Converter Using Isolated Circuits And...
Hardware Analysis of Resonant Frequency Converter Using Isolated Circuits And...
 
Simulated Analysis of Resonant Frequency Converter Using Different Tank Circu...
Simulated Analysis of Resonant Frequency Converter Using Different Tank Circu...Simulated Analysis of Resonant Frequency Converter Using Different Tank Circu...
Simulated Analysis of Resonant Frequency Converter Using Different Tank Circu...
 
Moon-bounce: A Boon for VHF Dxing
Moon-bounce: A Boon for VHF DxingMoon-bounce: A Boon for VHF Dxing
Moon-bounce: A Boon for VHF Dxing
 
“MS-Extractor: An Innovative Approach to Extract Microsatellites on „Y‟ Chrom...
“MS-Extractor: An Innovative Approach to Extract Microsatellites on „Y‟ Chrom...“MS-Extractor: An Innovative Approach to Extract Microsatellites on „Y‟ Chrom...
“MS-Extractor: An Innovative Approach to Extract Microsatellites on „Y‟ Chrom...
 
Importance of Measurements in Smart Grid
Importance of Measurements in Smart GridImportance of Measurements in Smart Grid
Importance of Measurements in Smart Grid
 
Study of Macro level Properties of SCC using GGBS and Lime stone powder
Study of Macro level Properties of SCC using GGBS and Lime stone powderStudy of Macro level Properties of SCC using GGBS and Lime stone powder
Study of Macro level Properties of SCC using GGBS and Lime stone powder
 

Kürzlich hochgeladen

Kürzlich hochgeladen (20)

TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
 
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
 
Navi Mumbai Call Girls 🥰 8617370543 Service Offer VIP Hot Model
Navi Mumbai Call Girls 🥰 8617370543 Service Offer VIP Hot ModelNavi Mumbai Call Girls 🥰 8617370543 Service Offer VIP Hot Model
Navi Mumbai Call Girls 🥰 8617370543 Service Offer VIP Hot Model
 
AWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of TerraformAWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of Terraform
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
 
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
 
FWD Group - Insurer Innovation Award 2024
FWD Group - Insurer Innovation Award 2024FWD Group - Insurer Innovation Award 2024
FWD Group - Insurer Innovation Award 2024
 
Ransomware_Q4_2023. The report. [EN].pdf
Ransomware_Q4_2023. The report. [EN].pdfRansomware_Q4_2023. The report. [EN].pdf
Ransomware_Q4_2023. The report. [EN].pdf
 
A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?
 
MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024
 
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodPolkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
 
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost SavingRepurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
 
GenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdfGenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdf
 
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data DiscoveryTrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
 
MS Copilot expands with MS Graph connectors
MS Copilot expands with MS Graph connectorsMS Copilot expands with MS Graph connectors
MS Copilot expands with MS Graph connectors
 
Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...
 
ICT role in 21st century education and its challenges
ICT role in 21st century education and its challengesICT role in 21st century education and its challenges
ICT role in 21st century education and its challenges
 

B07040308

Nowadays web users are more dependent on search engines to access the web, and search engine companies strive to produce the best possible search results. Search logs are records of the interactions between users and the search engine; they hold data such as the user id, search keywords, URL (Uniform Resource Locator) clicks, and the date and time of each search. Publishing a search log can be done in two ways: providing the log to a third party, or deploying the log for the search engine's own functions. The information in the log supports the analysis of users' search behaviours and patterns, which helps to enhance search results; such analysis is performed by the research community. When these logs are provided to a third party, a privacy guarantee must be given to the users represented in them. When privacy is emphasised, the utility, i.e. the number of items released in the log, decreases, because more records are eliminated.

In the AOL (America Online) search log release, the log was published with the user ids replaced by random numbers [1]. The privacy factor was compromised for more utility; there is always a trade-off between privacy and utility. Withholding the user's identity alone does not guarantee privacy: the identity can also be revealed by the formation of queries and the links followed by the user. The keywords may also contain sensitive information such as social security numbers, credit card numbers, and demographic details. Hence the focus has to be on these items, and strong strategies have to be followed before releasing the keywords formed by users in the search log.

Earlier work released logs with the user identity replaced by random numbers [2]. This was not a promising approach because of its weak privacy guarantee, and it is prone to background linkage attacks [1]. Moreover, a user's identity can be revealed by the keywords the user forms.
The AOL search log release stands as an example of this case: it released the search logs of several users with the ids replaced by random numbers, yet the public was able to identify certain users from the formation of their queries. Later the work was extended to anonymization [2], where similar items were grouped and released; achieving k-anonymity and l-diversity are among the privacy-preserving techniques used. The dilemma with those techniques is that they are prone to background knowledge attacks [1], and the crucial side effect is that the uniqueness of each user's search is lost. The same holds for generalization techniques.

The Zealous algorithm [1] was proposed to release the frequent items in the log through a two-threshold framework. Frequent queries are more privacy-promising: a keyword becomes frequent when it reflects a common public interest, so publishing it gives little chance of identifying any individual user. Publishing
the frequent items alone will not contribute enough to the utility of the log; certain infrequent items must also be considered. In practice, a search log may contain few frequent items and many infrequent ones. An infrequent item has a higher probability of identifying a user, yet some infrequent queries are of public interest and relevant to a frequent query. The Confess algorithm tries to find such keywords and their corresponding URL clicks and publishes them in the search log. To qualify the infrequent keywords and URL clicks, separate qualifying strategies are needed; hence different qualifying constraints are set for keywords and for URL clicks. The resulting Confess log is applied to search engine functions such as query suggestion and query substitution, and its performance is studied and evaluated. The Confess log publishing strategy is also applied to search engines, and its effectiveness is compared with the Zealous algorithm in terms of the average number of items published in the log.

II. ZEALOUS ALGORITHM
The Zealous algorithm uses a two-phase framework to discover the frequent items in the log and finally publishes them. To discover the frequent items, it uses two threshold values. The first threshold is set based on the number of user contributions in the log. Laplacian noise is then added and the items are filtered against the second threshold; the addition of noise is meant to mislead attackers and produce non-exact statistics [4]. With this method of finding the frequent items, the resulting log achieves probabilistic differential privacy. The main objective of the Zealous algorithm is to figure out the frequent items in the log.
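The two-phase filtering just described can be sketched as follows. This is a minimal illustration, not the authors' implementation: the per-user contribution cap, the threshold values, and adding the noise to the item counts (rather than to the threshold itself) are assumptions made for the sketch.

```python
import math
import random
from collections import Counter

def laplace_noise(scale):
    # Draw one sample from a Laplace(0, scale) distribution via inverse CDF.
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def zealous_filter(user_items, tau1, tau2, noise_scale=1.0, per_user_cap=5):
    """Two-phase frequent-item filter in the spirit of Zealous.

    user_items   : dict mapping user id -> list of search items
    tau1, tau2   : first and second threshold values
    noise_scale  : scale of the Laplacian noise added between the phases
    per_user_cap : limit on how many items each user may contribute
    """
    counts = Counter()
    for items in user_items.values():
        # Cap each user's contribution so one user cannot dominate the counts.
        for item in items[:per_user_cap]:
            counts[item] += 1

    # Phase 1: keep only items whose raw count reaches the first threshold.
    candidates = {item: c for item, c in counts.items() if c >= tau1}

    # Phase 2: add Laplacian noise and re-filter with the second threshold.
    return {item for item, c in candidates.items()
            if c + laplace_noise(noise_scale) >= tau2}
```

With a keyword contributed by many users and a one-off keyword, only the common one survives both phases; the one-off item never reaches the first threshold.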
The Zealous algorithm was applied to a sample search log, collected from a local search engine, covering items such as keywords and URL clicks. The log contained more than 200 entries from 58 users, and the algorithm was applied with the threshold values given in the table.

Table I: Keyword log of Zealous

The keywords above passed the filtration of the two-phase framework and are identified as frequent keywords. Similarly, the frequent URL clicks in the log are identified by the two threshold values.

Table II: URL log of Zealous

However, the Zealous algorithm leaves out the infrequent keywords in the log, and setting suitable threshold values is itself a challenging task. A search log typically holds several infrequent items; an infrequent item that has no possibility of revealing a user's identity should be identified and published. Hence Confess is proposed to qualify such infrequent items in the log.

III. CONFESS ALGORITHM
The Confess algorithm follows the Zealous algorithm to trace out the frequent items: Zealous's two-phase threshold framework separates the frequent from the infrequent items, and further processing is done to qualify the infrequent ones. The infrequent items are retrieved from the log and the following constraints are checked against them. The two item types considered for qualification are the keywords and the URL clicks, as they carry the most user information.
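At a high level, Confess publishes the frequent items plus whatever infrequent items pass the qualification constraints. A minimal sketch of that flow, with `qualifies` standing in for any predicate built from the constraints that follow (the callback-based structure is an assumption for illustration):

```python
def confess_pipeline(log_items, frequent_items, qualifies):
    """Publish frequent items plus qualified infrequent items.

    log_items      : all items (keywords or URL clicks) in the original log
    frequent_items : items already released by the Zealous phase
    qualifies      : predicate applying the Confess constraints to an
                     infrequent item
    """
    published = []
    for item in log_items:
        # Frequent items pass unconditionally; infrequent ones must qualify.
        if item in frequent_items or qualifies(item):
            published.append(item)
    return published
```

For example, with the frequent set {"exam results"} and a predicate that accepts sub-keywords containing "search logs", an item like "about search logs" is published while an opaque number is not.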
A. Qualifying the keywords
Keywords are the prime input through which the user expresses his needs on the web, and the keywords a user forms can reveal much private information about him, a gold mine for anyone trying to learn the user's identity. Hence several strategies are formulated to qualify only the keywords that are privacy-promising [5].

1) Profile information: Users register before performing a search, providing certain mandatory information. Each infrequent query is first checked against the profile information to see whether it contains any sensitive data; if so, it is dropped from further processing. Consider the keyword 07480433 of a user: it contains the social security number, which is likely to reveal the user's identity. This is detected by comparing the item with the profile information registered by the user; if the keyword contains any of that information, the keyword is not qualified. In this way, keywords containing information such as names, dates of birth, phone numbers, social security numbers, or address details are identified and prohibited from publishing.

2) Sub-keyword checking: Keywords formed by different users differ and carry each user's uniqueness. Each infrequent keyword is compared with the frequent keywords to check whether it is a sub-keyword of one of them; if so, the keyword is qualified. Suppose "lecture notes about search logs" is a frequent keyword discovered by the Zealous algorithm and "about search logs" is an infrequent keyword. The latter is a sub-keyword of the frequent item, so it is qualified to be published. This adds useful entries to the published log.
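The two keyword checks can be sketched as follows; the lower-cased substring matching is a simplifying assumption, since the paper does not specify the exact matching rule.

```python
def qualify_keyword(keyword, profile_values, frequent_keywords):
    """Qualify an infrequent keyword using the two checks described above.

    profile_values    : the user's registered profile data (name, phone,
                        social security number, ...), lower-cased strings
    frequent_keywords : keywords already released by the frequent-item phase
    """
    kw = keyword.lower()

    # 1) Profile check: reject any keyword containing a profile value.
    if any(value and value in kw for value in profile_values):
        return False

    # 2) Sub-keyword check: qualify the keyword only if it occurs inside
    #    some already-frequent keyword.
    return any(kw in frequent.lower() for frequent in frequent_keywords)
```

On the paper's examples, "about search logs" qualifies because it is a sub-keyword of the frequent "lecture notes about search logs", while 07480433 is rejected by the profile check.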
B. Qualifying the URL clicks
A URL identifies the location of a resource on the web. URL clicks are important items in the log, as they record the user's visits to web pages; keywords and URL clicks together can lead to identifying a user. Hence certain constraints are set to qualify the URL clicks.

1) URL shortening: A URL (Uniform Resource Locator) normally contains fields such as the protocol, authority, host, port, path, and filename. The complete URL of a click is likely to reveal the user's identity, so attributes such as the filename and path are removed; this conceals the exact page the user visited. Consider the URL click https://developer.cebv.in/search-appliance/document/50/help_mini/status_log, which is shortened to https://developer.cebv.in. The shortened URL gives less information about the page visited; revealing the complete URL value could sometimes identify a user, so shortening is done to preserve privacy.

2) Multiple visits to the same URL: A user obtains several search results for a keyword and chooses the links appropriate to his search intention. Several links chosen by the user may point to the same site, which indicates that the user finds the information on those pages satisfying. Consider the keyword "exam results" in the log, for which the user clicked the following search results:
http://www.results.in/colleges/BEresults.html
http://www.results.in/colleges/MCAres.html
http://www.results.in/colleges/MEresults.html
http://www.results.in/colleges/MBAres.html
These clicks reveal that the user finds the intended content on http://www.results.in, so that URL is qualified and included in the published log.
When multiple links chosen from the search results point to the same URL, it shows that the page is a prevalent one offering useful information for the input keyword, and hence publishing it can also be privacy-promising.

3) The URL containing the keyword: The user searches by a keyword and obtains results. The URL chosen by the user may contain the keyword as a sub-term, which indicates a relevant click, and such URLs can be included in the published log. Suppose the keyword "exam results" is in the search log and the URL clicked by the user is http://www.examinfo.in; then this URL is added to the published log. A clicked URL containing the keyword shows that the web page is of common interest. This depends strongly on how the user forms the keyword and follows the links in the results.
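Constraints 1 to 3 can be sketched as follows. The `urlparse`-based shortening and the term-level match in `keyword_in_url` are assumptions about details the paper leaves open (it does not say how "contains the keyword as a sub-term" is tested).

```python
from collections import Counter
from urllib.parse import urlparse

def shorten_url(url):
    # Constraint 1: keep only the scheme and host, dropping path and filename.
    parts = urlparse(url)
    return f"{parts.scheme}://{parts.netloc}"

def repeated_hosts(clicked_urls, min_visits=2):
    # Constraint 2: hosts the user reached via several different result links.
    counts = Counter(shorten_url(u) for u in clicked_urls)
    return {host for host, c in counts.items() if c >= min_visits}

def keyword_in_url(keyword, url):
    # Constraint 3: qualify the click when a term of the keyword appears
    # somewhere in the URL string.
    target = url.lower()
    return any(term in target for term in keyword.lower().split())
```

On the paper's examples, the four results.in clicks collapse to the single qualified host http://www.results.in, and http://www.examinfo.in qualifies for the keyword "exam results".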
4) URL of top-ranked pages: A user may select a link or page for a keyword from the search results for various reasons. When the clicked page is one of the top-ranked pages, its URL can be published; a page frequently visited by a user is also considered for publishing. The top-ranked pages are safe enough to be published in the log [1].

By the above constraints, the infrequent URL clicks and keywords of the users are qualified and published, which improves the utility of the published log. The Confess algorithm is applied to the keywords and URL clicks of the several users in the search log.

IV. RESULTS
The following tables depict the results produced by the Confess algorithm on the same search log used for the Zealous algorithm.

Table III: Keyword log of Confess

The keyword log above results from applying the Confess algorithm to find the qualifying infrequent items. Note that the qualified keywords are parts of frequent keywords; releasing such keywords improves utility, as the published log contains more entries.

Table IV: URL log of Confess

This log contains the qualified infrequent URL clicks along with the frequent URLs. After the keywords and URL clicks are qualified, they are matched against the entries in the search log, yielding the entries with the qualified keyword, URL click, and the date and time of the users.

Table V: Portion of the search log after qualification of the items
The log above is a portion of the search log after qualification. It contains the user id, keyword, URL click and timestamp of each entry. The log retains the user id to carry the uniqueness of each user; if the user id were eliminated, various session information would be lost because the user's uniqueness would no longer be obvious.

V. COMPARATIVE STUDY
The performance of the Confess algorithm is analyzed through parameters such as response time and the average number of items published in the log, and the proposed algorithm is compared with the Zealous algorithm in terms of the utility of the resulting log. The statistics below show the average number of keywords published in the Zealous log and the Confess log. The average number of keywords (Nk) is the ratio of the number of items released in the published log to the total number of items in the original log. Various experimental search logs are considered for this study.

Table VI: Comparison of the average number of keywords

From these statistics the graph below is generated.

Figure 1.1: Comparison of the average number of keywords

It can be inferred that the Confess keyword log outputs more keywords than the Zealous log, and at some instances the averages are almost equal. This is highly probabilistic, because it depends on the users' way of forming keywords. The statistics below show the average number of URL clicks published in the Zealous log and the Confess log. The average number of URL clicks (Nu) is likewise the ratio of the number of items in the published log to the number of items in the original, unprocessed log.

Table VII: Comparison of the average number of URL clicks

From these statistical data the graph below is generated.

Figure 1.2: Comparison of the average number of URL clicks
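The Nk and Nu metrics are simple ratios and can be computed as below; the item lists are invented for illustration, not data from the paper's experiments.

```python
def utility_ratio(published_log, original_log):
    # Nk (or Nu): items released in the published log divided by the
    # total number of items in the original, unprocessed log.
    return len(published_log) / len(original_log) if original_log else 0.0

# Hypothetical item counts, only to illustrate the metric:
original = ["exam results", "lecture notes about search logs",
            "about search logs", "07480433", "rare query"]
zealous = ["exam results", "lecture notes about search logs"]
confess = zealous + ["about search logs"]

print(utility_ratio(zealous, original))  # 0.4
print(utility_ratio(confess, original))  # 0.6
```

Here Confess releases one additional qualified infrequent keyword, so its ratio exceeds the Zealous ratio, which is the pattern the comparative study reports.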
It can be inferred that the Confess log also outputs more URL clicks than the Zealous log while maintaining the privacy of the users. It can also be noticed that the URL log produces more utility than the keyword log, as more URL clicks are qualified. From these studies, qualifying infrequent items in the log clearly enhances the utility of the published log. The resulting log can be deployed to support various search engine functions, reducing the time complexity of using the log compared with the original, unprocessed search log.

VI. APPLICATIONS
Since the Confess log yields more utility, it can be applied to various search engine functions such as index caching, query substitution, and query suggestion. These activities must be processed quickly to give users a better search experience, and time consumption drops when the Confess log is consumed rather than the original log. The utility exceeds that of the Zealous log while privacy is still achieved.

VII. CONCLUSION
The studies above show that the average number of items released increases; the utility of the search log is therefore improved by including the qualified infrequent items from the log. Publishing those infrequent items does not disturb the privacy of the users, since each item must satisfy constraints that are privacy-promising.

VIII. FUTURE ENHANCEMENT
Better qualifying criteria can be set for the infrequent keywords and URL clicks, and the work can be extended to set constraints for users who are not registered with the search engine. However, challenges still lie in discovering the frequent items in search logs, and more efficient methods to discover them can be formulated.

REFERENCES
[1].
M. Götz, A. Machanavajjhala, G. Wang, X. Xiao, and J. Gehrke, "Publishing Search Logs - A Comparative Study of Privacy Guarantees," IEEE Transactions on Knowledge and Data Engineering, vol. 24, no. 3, March 2012.
[2]. E. Adar, "User 4xxxxx9: Anonymizing Query Logs," Proc. World Wide Web (WWW) Workshop on Query Log Analysis, 2007.
[3]. A. Korolova, K. Kenthapadi, N. Mishra, and A. Ntoulas, "Releasing Search Queries and Clicks Privately," Proc. 18th Int'l Conf. World Wide Web (WWW), 2009.
[4]. C. Dwork, K. Kenthapadi, F. McSherry, I. Mironov, and M. Naor, "Our Data, Ourselves: Privacy via Distributed Noise Generation," Proc. Ann. Int'l Conf. Theory and Applications of Cryptographic Techniques (EUROCRYPT), 2006.
[5]. V. S. Iyengar, "Transforming Data to Satisfy Privacy Constraints," Proc. ACM SIGKDD Int'l Conf. Knowledge Discovery and Data Mining, 2002.