Anna University Sponsored FDP on
CS8492 - Database Management Systems
Topics: Recent Research Perspective in Different Database
Management Systems, Importance of DBMS in Digital India
Date: 06.12.2018
Venue: University College of Engineering Tindivanam,
Tindivanam.
Dr. A. Kathirvel, Professor & HoD, Computer Science & Engg.
M N M Jain Engineering College, Chennai
Data Mining & Big Data
Challenges and Research Opportunities
10 Challenging Problems in Data Mining Research
1. Developing a Unifying Theory of Data Mining
• The current state of the art of data-mining research is too "ad hoc"
– techniques are designed for individual problems
– no unifying theory
• Needs unifying research
– exploration vs. explanation
• Long-standing theoretical issues
– How to avoid spurious correlations?
• Deep research
– Knowledge discovery of hidden causes?
– Similar to the discovery of Newton's laws?
An Example (from tutorial slides by Andrew Moore):
• VC dimension. If you've got a learning algorithm in one hand and a dataset in the other, to what extent can you decide whether the learning algorithm is in danger of overfitting or underfitting?
– formal analysis of the fascinating question of how overfitting can happen,
– estimating how well an algorithm will perform on future data based solely on its training-set error,
– a property (the VC dimension) of the learning algorithm. The VC dimension thus gives an alternative to cross-validation, called Structural Risk Minimization (SRM), for choosing classifiers.
– comparison of CV, SRM, AIC and BIC.
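To make the VC-dimension idea concrete, here is a minimal sketch of Vapnik's generalization bound and SRM-style model choice. The bound formula is the standard one; the candidate models and sample sizes below are invented for illustration, not taken from the slides.

```python
import math

def vc_bound(train_error, n_samples, vc_dim, delta=0.05):
    """Vapnik-style upper bound on true error: training error plus a
    complexity penalty that grows with VC dimension d and shrinks with
    sample size N (holds with probability at least 1 - delta)."""
    d, n = vc_dim, n_samples
    penalty = math.sqrt((d * (math.log(2 * n / d) + 1) + math.log(4 / delta)) / n)
    return train_error + penalty

# SRM in miniature: among models of increasing capacity, pick the one
# minimizing the bound, not the one with the smallest training error.
candidates = [(0.20, 3), (0.10, 10), (0.02, 100)]  # (train error, VC dim)
best = min(candidates, key=lambda c: vc_bound(c[0], n_samples=1000, vc_dim=c[1]))
print("SRM picks the model with VC dimension", best[1])
```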
2. Scaling Up for High-Dimensional Data and High-Speed Streams
• Scaling up is needed for
– ultra-high-dimensional classification problems (millions or billions of features, e.g., bio data)
– ultra-high-speed data streams
• Streams
– continuous, online processing
– e.g., how to monitor network packets for intruders?
– concept drift and environment drift?
– RFID network and sensor network data
Excerpt from Jian Pei's tutorial: http://www.cs.sfu.ca/~jpei/
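One standard building block for such streams is keeping a fixed-size uniform sample in a single pass. A minimal sketch (reservoir sampling; the example stream is synthetic, not from the slides):

```python
import random

def reservoir_sample(stream, k):
    """Keep a uniform random sample of k items from a stream of unknown
    length, in one pass and O(k) memory -- no rewinding the stream."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = random.randint(0, i)  # item survives with probability k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

# e.g., retain 5 representative packets from a million-packet stream
print(reservoir_sample(range(1_000_000), k=5))
```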
3. Sequential and Time-Series Data
• How to efficiently and accurately cluster, classify and predict the trends?
• Time-series data used for predictions are contaminated by noise
– How to do accurate short-term and long-term predictions?
– Signal-processing techniques introduce lags in the filtered data, which reduces accuracy (illustrated below)
– Key issues: source selection, domain knowledge in rules, and optimization methods
[Figure: real time-series data obtained from wireless sensors in the Hong Kong UST CS department hallway]
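To illustrate the lag point above, here is a minimal sketch of single exponential smoothing: a low smoothing weight suppresses noise but makes the filtered series trail a sudden trend. The series and weight are synthetic assumptions.

```python
def exp_smooth(series, alpha=0.2):
    """Single exponential smoothing: s_t = alpha*x_t + (1-alpha)*s_{t-1}.
    Smaller alpha filters more noise but lags turning points more."""
    out = [series[0]]
    for x in series[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out

ramp = [0.0] * 10 + [float(t) for t in range(10)]  # sudden upward trend at t = 10
smoothed = exp_smooth(ramp, alpha=0.2)
print(ramp[-1], round(smoothed[-1], 2))  # the filtered value trails the true level
```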
4. Mining Complex Knowledge from Complex Data
• Mining graphs
• Data that are not i.i.d. (independent and identically distributed)
– many objects are not independent of each other, and are not of a single type
– mine the rich structure of relations among objects
– e.g., interlinked Web pages, social networks, metabolic networks in the cell
• Integration of data mining and knowledge inference
– The biggest gap: systems are unable to relate the results of mining to the real-world decisions they affect; all they can do is hand the results back to the user.
• More research on the interestingness of knowledge
[Diagram: a linked graph of authors, papers (titles) and conference names]
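A small sketch of representing such non-i.i.d., linked objects as a typed graph, using the networkx library; the node names and labels are invented for illustration.

```python
import networkx as nx  # assumes networkx is installed

# A tiny heterogeneous graph: an author linked to a paper, the paper
# linked to the conference where it appeared -- objects of several types.
G = nx.Graph()
G.add_node("Alice", kind="author")
G.add_node("Paper1", kind="paper")
G.add_node("KDD", kind="conference")
G.add_edges_from([("Alice", "Paper1"), ("Paper1", "KDD")])

# "Mining the structure of relations": every paper reachable from an author
papers = [n for n in nx.node_connected_component(G, "Alice")
          if G.nodes[n]["kind"] == "paper"]
print(papers)  # ['Paper1']
```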
5. Data Mining in a Network Setting
• Community and social networks
– Linked data between emails, Web pages, blogs, citations, sequences and people
– Static and dynamic structural behavior
• Mining in and for computer networks
– detect anomalies (e.g., sudden traffic spikes due to DoS (Denial of Service) attacks)
– Need to handle 10-Gigabit Ethernet links: (a) detect, (b) trace back, (c) drop packets
[Pictures: slide from Matthew Pirretti, Penn State; an example of packet streams (data courtesy of NCSA, UIUC)]
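A crude sketch of one anomaly-detection idea mentioned above: flag a traffic sample that sits far outside the recent mean. The window size, threshold and traffic rates are illustrative assumptions, not a production detector.

```python
from collections import deque
import statistics

def make_spike_detector(window=60, threshold=3.0):
    """Flag a packets-per-second sample that sits more than `threshold`
    standard deviations above the recent mean -- a crude stand-in for
    spotting DoS-style traffic spikes on a link."""
    history = deque(maxlen=window)

    def check(rate):
        alarm = False
        if len(history) >= 10:
            mu = statistics.mean(history)
            sigma = statistics.pstdev(history)
            alarm = sigma > 0 and rate > mu + threshold * sigma
        history.append(rate)
        return alarm

    return check

check = make_spike_detector()
for r in [100, 102, 98, 101, 99, 103, 97, 100, 102, 99]:
    check(r)                      # warm up on steady traffic
print(check(100), check(5000))    # False True
```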
6. Distributed Data Mining and Mining Multi-Agent Data
• Need to correlate the data seen at the various probes (such as in a sensor network)
• Adversarial data mining: adversaries deliberately manipulate the data to sabotage the miner (e.g., make it produce false negatives)
• Game theory may be needed to help
• Example game (matching pennies): Player 1 is the miner, Player 2 the adversary; each picks an action, heads (H) or tails (T), and the outcome is (1,-1) or (-1,1) depending on whether the actions match.
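A minimal sketch of this adversarial game, assuming the standard matching-pennies payoffs (the miner wins +1 on a match; the cell assignment is a reconstruction of the slide's game tree): against a uniformly mixing adversary, no miner strategy beats an expected payoff of zero, which is the game's mixed-strategy equilibrium.

```python
import itertools

# Miner's payoff in the matching-pennies game sketched above:
# +1 when the two actions match, -1 otherwise.
payoff = {("H", "H"): 1, ("T", "T"): 1, ("H", "T"): -1, ("T", "H"): -1}

def expected_payoff(p_miner_H, p_adv_H):
    """Miner's expected payoff when both sides mix between H and T."""
    miner = {"H": p_miner_H, "T": 1 - p_miner_H}
    adv = {"H": p_adv_H, "T": 1 - p_adv_H}
    return sum(miner[m] * adv[a] * payoff[(m, a)]
               for m, a in itertools.product("HT", repeat=2))

# Against a 50/50 adversary, every miner strategy earns 0 in expectation.
print(expected_payoff(0.9, 0.5), expected_payoff(0.1, 0.5))  # 0.0 0.0
```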
7. Data Mining for Biological and Environmental Problems
• New problems raise new questions
• Large-scale problems especially so
– Biological data mining, such as HIV vaccine design
– DNA, chemical properties, 3D structures, and functional properties → need to be fused
– Environmental data mining
– Mining for solving the energy crisis
8. Data-Mining-Process Related Problems
• How to automate the mining process?
– the composition of data mining operations
– data cleaning, with logging capabilities
– visualization and mining automation
• Need a methodology to help users avoid many data mining mistakes
– What is a canonical set of data mining operations?
[Diagram: a pipeline of Sampling → Feature Selection → Mining …]
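One concrete reading of a "canonical, composable set of operations" is a pipeline object. A hedged sketch using scikit-learn; the dataset and stage choices are illustrative assumptions, not the slides' methodology.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_features=20, random_state=0)

pipe = Pipeline([                      # cleaning -> feature selection -> mining
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=5)),
    ("mine", DecisionTreeClassifier(max_depth=3, random_state=0)),
])
print(pipe.fit(X, y).score(X, y))      # one object: easy to log, reuse, automate
```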
9. Security, Privacy and Data Integrity
• How to ensure users' privacy while their data are being mined?
• How to do data mining for the protection of security and privacy?
• Knowledge integrity assessment
– Data are intentionally modified from their original version, in order to misinform the recipients or for privacy and security
– Development of measures to evaluate the knowledge integrity of a collection of
• data
• knowledge and patterns
http://www.cdt.org/privacy/
Headline (Nov 21, 2005): Senate Panel Approves Data Security Bill - The Senate Judiciary Committee on Thursday passed legislation designed to protect consumers against data security failures by, among other things, requiring companies to notify consumers when their personal information has been compromised. While several other committees in both the House and Senate have their own versions of data security legislation, S. 1789 breaks new ground by including provisions permitting consumers to access their personal files …
10. Dealing with Non-static, Unbalanced and Cost-sensitive Data
• The UCI datasets are small and not highly unbalanced
• Real-world data are large (10^5 features), but fewer than 1% of the examples belong to the useful (positive) classes
• There is much information on costs and benefits, but no overall model of profit and loss
• Data may evolve, with a bias introduced by sampling
• Each test incurs a cost
• Data are extremely unbalanced
• Data change with time
[Diagram: a cost-sensitive diagnosis tree - temperature 39°C, then pressure?, blood test?, cardiogram?, essay?]
New Challenges
• Privacy-preserving data mining
• Data mining over compartmentalized databases
Inducing Classifiers over Privacy-Preserved Numeric Data
[Diagram: each client perturbs its record with a randomizer before sharing it - e.g., Alice's record (age 30, salary 25K) becomes (65, 50K), where 65 = 30 + 35 of added noise, and John's record (50, 40K) becomes (35, 60K). From the randomized records, the miner reconstructs the age and salary distributions and feeds them to a decision tree algorithm to build the model.]
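A hedged sketch of the randomize-then-reconstruct idea in the diagram: clients add noise drawn from a known distribution, and the miner recovers aggregate statistics without ever seeing true values. The real scheme reconstructs full distributions (e.g., by Bayesian estimation); here only the mean and standard deviation are recovered, and all numbers are illustrative.

```python
import random
import statistics

random.seed(0)
true_ages = [random.gauss(40, 10) for _ in range(10_000)]    # private values
noise_sd = 35.0                                              # known to the miner
shared = [a + random.gauss(0, noise_sd) for a in true_ages]  # what gets shipped

# Moment-based reconstruction: Var(shared) = Var(age) + Var(noise)
est_mean = statistics.mean(shared)
est_sd = (statistics.pvariance(shared) - noise_sd ** 2) ** 0.5
print(round(est_mean, 1), round(est_sd, 1))  # close to 40 and 10
```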
Other Recent Work
• Cryptographic approach to privacy-preserving data mining
– Lindell & Pinkas, Crypto 2000
• Privacy-Preserving discovery of association rules
– Vaidya & Clifton, KDD2002
– Evfimievski et al., KDD 2002
– Rizvi & Haritsa, VLDB 2002
[Diagram: a "Frequent Traveler" rating model built over compartmentalized databases: credit agencies, criminal records, demographic data, birth and marriage records, phone and email data, held by state and local agencies.]
Computation over Compartmentalized Databases
• Randomized data shipping
• Local computations followed by combination of partial models
• On-demand secure data shipping and data composition
Some Hard Problems
• Past may be a poor predictor of future
– Abrupt changes
– Wrong training examples
• Actionable patterns (principled use of domain knowledge?)
• Over-fitting vs. not missing the rare nuggets
• Richer patterns
• Simultaneous mining over multiple data types
• When to use which algorithm?
• Automatic, data-dependent selection of algorithm parameters
Discussion
• Should data mining be viewed as "rich" querying and be "deeply" integrated with database systems?
– Most current work makes little use of database functionality
• Should analytics be an integral concern of database systems?
• Issues in data mining over heterogeneous data repositories (Relationship to
the heterogeneous systems discussion)
Summary
• Developing a Unifying Theory of Data Mining
• Scaling Up for High Dimensional Data/High Speed Streams
• Mining Sequence Data and Time Series Data
• Mining Complex Knowledge from Complex Data
• Data Mining in a Network Setting
• Distributed Data Mining and Mining Multi-agent Data
• Data Mining for Biological and Environmental Problems
• Data-Mining-Process Related Problems
• Security, Privacy and Data Integrity
• Dealing with Non-static, Unbalanced and Cost-sensitive Data
Summary
• Data mining has shown promise but needs much further research
"We stand on the brink of great new answers, but even more, of great new questions" -- Matt Ridley
Introduction to Big Data
What are we going to understand?
• What is Big Data?
• Why did we land here?
• To whom does it matter?
• Where is the money?
• Are we ready to handle it?
• What are the concerns?
• Tools and technologies
– Is Big Data <=> Hadoop?
Simple to start
• What is the maximum file size you have dealt with so far?
– Movies/files/streaming video that you have used?
– What have you observed?
• What is the maximum download speed you get?
• Simple computation
– How much time does it take just to transfer the data? (see the worked example below)
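As a worked example (the file size and link speed are assumed for illustration, not from the slides):

```python
# Back-of-the-envelope transfer time: a 1 TB file over a 100 Mbit/s link.
file_bits = 1e12 * 8        # 1 TB expressed in bits
link_bps = 100e6            # 100 Mbit/s
hours = file_bits / link_bps / 3600
print(f"{hours:.1f} hours just to move the data")  # ~22.2 hours
```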
What is big data?
• "Every day, we create 2.5 quintillion bytes of data — so much that 90% of the data in the world today has been created in the last two years alone. This data comes from everywhere: sensors used to gather climate information, posts to social media sites, digital pictures and videos, purchase transaction records, and cell phone GPS signals, to name a few. This data is big data."
Huge amount of data
• There are huge volumes of data in the world:
– From the beginning of recorded time until 2003, we created 5 exabytes (5 billion gigabytes) of data.
– In 2011, the same amount was created every two days.
– In 2013, the same amount of data was created every 10 minutes.
Big data spans three dimensions: Volume, Velocity and
Variety
• Volume: Enterprises are awash with ever-growing data of all types, easily amassing
terabytes—even petabytes—of information.
– Turn 12 terabytes of Tweets created each day into improved product sentiment
analysis
– Convert 350 billion annual meter readings to better predict power consumption
• Velocity: Sometimes 2 minutes is too late. For time-sensitive processes such as catching
fraud, big data must be used as it streams into your enterprise in order to maximize its
value.
– Scrutinize 5 million trade events created each day to identify potential fraud
– Analyze 500 million daily call detail records in real-time to predict customer churn
faster
– The latest I have heard: even a 10-nanosecond delay is too much.
• Variety: Big data is any type of data - structured and unstructured data such as text,
sensor data, audio, video, click streams, log files and more. New insights are found when
analyzing these data types together.
– Monitor hundreds of live video feeds from surveillance cameras to target points of interest
– Exploit the 80% data growth in images, video and documents to improve customer
satisfaction
Finally…
'Big data' is similar to 'small data', but bigger
… but having bigger data requires different approaches:
– techniques, tools, architecture
… with an aim to solve new problems
– or old problems in a better way
To whom does it matter?
• Research community ☺
• Business community - new tools, new capabilities, new infrastructure, new business models, etc.
• Specific sectors, e.g., financial services…
How are the revenues looking?
The Social Layer in an Instrumented, Interconnected World
• 2+ billion people on the Web by end of 2011
• 30 billion RFID tags today (1.3 billion in 2005)
• 4.6 billion camera phones worldwide
• 100s of millions of GPS-enabled devices sold annually
• 76 million smart meters in 2009; 200 million by 2014
• 12+ TB of tweet data every day
• 25+ TB of log data every day
• ? TB of data every day
What does Big Data trigger?
• From “Big Data and the Web: Algorithms for Data Intensive Scalable Computing”, Ph.D. thesis, Gianmarco
BIG DATA is not just HADOOP
• Understand and navigate federated big data sources → Federated Discovery and Navigation
• Manage & store huge volumes of any data → Hadoop File System / MapReduce
• Structure and control data → Data Warehousing
• Manage streaming data → Stream Computing
• Analyze unstructured data → Text Analytics Engine
• Integrate and govern all data sources → Integration, Data Quality, Security, Lifecycle Management, MDM
Types of tools typically used in a Big Data scenario
• Where is the processing hosted?
– Distributed servers/cloud
• Where is the data stored?
– Distributed storage (e.g., Amazon S3)
• What is the programming model?
– Distributed processing (e.g., MapReduce; see the sketch below)
• How is the data stored and indexed?
– High-performance schema-free databases
• What operations are performed on the data?
– Analytic/semantic processing (e.g., RDF/OWL)
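A single-process sketch of the MapReduce programming model referenced above (the classic word count; no Hadoop involved, and the grouping step stands in for the framework's shuffle):

```python
from collections import defaultdict

def map_phase(doc):
    """Map: emit (key, value) pairs -- here (word, 1)."""
    for word in doc.split():
        yield word.lower(), 1

def reduce_phase(word, counts):
    """Reduce: fold all values that share a key."""
    return word, sum(counts)

docs = ["Big Data is not just Hadoop", "Hadoop popularized MapReduce"]
groups = defaultdict(list)
for doc in docs:                        # map + shuffle (group by key)
    for word, one in map_phase(doc):
        groups[word].append(one)
result = dict(reduce_phase(w, c) for w, c in groups.items())
print(result["hadoop"])                 # 2
```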
When is dealing with Big Data hard?
• When the operations on the data are complex:
– e.g., simple counting is not a complex problem
– modeling and reasoning with data of different kinds can get extremely complex
• Good news with big data:
– Often, because of the vast amount of data, modeling techniques can get simpler (e.g., smart counting can replace complex model-based analytics)…
– …as long as we can deal with the scale.
Why Big Data?
• Key enablers for the appearance and growth of Big Data are:
– increase in storage capabilities
– increase in processing power
– availability of data
BIG DATA: CHALLENGES AND OPPORTUNITIES
WHAT IS BIG DATA?
 Big Data is a popular term used to denote the exponential growth and availability of data, both structured and unstructured.
Very important! Why?
More data leads to better analysis, enabling better decisions!
Definition of BIG DATA
• As far back as 2001, industry analyst Doug Laney (currently with Gartner) articulated the now-mainstream definition of big data as three Vs: Volume, Velocity and Variety.
Let us look at each one of these in turn.
Definition of BIG DATA: Volume
• Factors that contribute to the increase in the volume of data:
– Transaction-based data stored through the years
– Unstructured data streaming in from social media
– Ever-increasing amounts of sensor and machine-to-machine data being collected
Definition of BIG DATA: Volume..
• Previously, data storage was a big issue; today it is not. But other issues emerge, such as:
• How to determine relevance within large data volumes?
• How to use analytics to create value from relevant data?
Definition of BIG DATA: Velocity
• Data is streaming in at unprecedented speed and must be dealt with in a
timely manner.
• RFID tags, sensors and smart metering are driving the need to deal with
torrents of data in near-real time.
• Reacting quickly enough to deal with data velocity is a challenge for most
organizations.
Definition of BIG DATA: Variety
• Data today comes in all types of formats: structured, numeric data in traditional databases; information created from line-of-business applications; unstructured text documents, email, video, audio, stock ticker data and financial transactions.
• Managing, merging and governing different varieties of data is something many organizations still grapple with.
Definition of BIG DATA: Others
• Variability:
In addition to the increasing velocities and varieties of data, data flows can be
highly inconsistent with periodic peaks.
• Is something trending in social media? Daily, seasonal and event-triggered
peak data loads can be challenging to manage.
• Even more so when unstructured data is involved.
Definition of BIG DATA: Others
• Complexity:
Today's data comes from diverse sources.
• It is still an undertaking to link, match, cleanse and transform data across
systems.
• It is necessary to connect and correlate relationships, hierarchies and
multiple data linkages or our data can quickly spiral out of control.
• Privacy: associated problems
BIG PROBLEMS
• The problems start right away during data acquisition, when the data tsunami requires us to make decisions, currently in an ad hoc manner, about:
• what data to keep?
• what to discard?
• how to store what we keep reliably, with the right metadata?
• Much data today is not in a structured format:
• for example, tweets and blogs are weakly structured pieces of text, while images and video are structured for storage and display, but not for semantic content and search.
• Transforming such content into a structured format for later analysis is a major challenge.
What has been achieved…
• During the last 35 years, data management principles such as:
• physical and logical independence,
• declarative querying, and
• cost-based optimization
have enabled the first round of business intelligence applications and laid the foundation for managing and analyzing Big Data today.
What is to be done?
• The many novel challenges and opportunities associated with Big Data
necessitate rethinking many aspects of these data management platforms,
while retaining other desirable aspects.
• Appropriate research & investment in Big Data will lead to a new wave of
fundamental technological advances that will be embodied in the next
generations of Big Data management and analysis platforms, products and
systems.
Big Data: Opportunity
• In a broad range of application areas, data is being collected at
unprecedented scale.
• Decisions that previously were based on guesswork, or on painstakingly
constructed models of reality, can now be made based on the data itself.
• Such Big Data analysis now drives nearly every aspect of our modern
society, including mobile services, retail, manufacturing, financial
services, life sciences, and physical sciences.
Big Data: Opportunity
• Scientific research has been revolutionized by Big Data.
• The Sloan Digital Sky Survey has today become a central resource for
astronomers the world over.
• The field of Astronomy is being transformed from one where taking
pictures of the sky was a large part of an astronomer’s job to one where the
pictures are all in a database already and the astronomer’s task is to find
interesting objects and phenomena in the database.
Big Data: Opportunity
• In the biological sciences, there is now a well-established tradition of
depositing scientific data into a public repository, and also of creating
public databases for use by other scientists.
• There is an entire discipline of bioinformatics that is largely devoted to the
curation and analysis of such data.
• As technology advances, particularly with the advent of Next Generation
Sequencing, the size and number of experimental data sets available is
increasing exponentially.
Big Data: Opportunities
• Big Data has the potential to revolutionize not just research, but also
education.
• A recent detailed quantitative comparison of different approaches taken by
35 charter schools in NYC has found that one of the top five policies
correlated with measurable academic effectiveness was the use of data to
guide instruction.
• Imagine a world in which we have access to a huge database where we
collect every detailed measure of every student's academic performance
Big Data: Opportunities
• This data could be used to design the most effective approaches to education, starting from reading, writing, and math, to advanced, college-level courses.
• We are far from having access to such data, but there are powerful trends
in this direction.
• In particular, there is a strong trend for massive Web deployment of
educational activities, and this will generate an increasingly large amount of
detailed data about students' performance
Big Data: Opportunities: Health Care
• It is widely believed that the use of information technology:
• can reduce the cost of healthcare
• can improve its quality, by making care more preventive and personalized and basing it on more extensive (home-based) continuous monitoring
Big Data: Opportunities: Others
• Effective use of Big Data for urban planning (through fusion of high-fidelity geographical data)
• Intelligent transportation (through analysis and visualization of live and
detailed road network data)
• Environmental modeling (through sensor networks ubiquitously
collecting data)
• Energy saving (through unveiling patterns of use)
• Smart materials (through the new materials genome initiative),
• Computational social sciences
Big Data: Opportunities: Others
• financial systemic risk analysis (through integrated analysis of a web of
contracts to find dependencies between financial entities)
• homeland security (through analysis of social networks and financial
transactions of possible terrorists)
• computer security (through analysis of logged information and other
events, known as Security Information and Event Management (SIEM))
Challenges
• The sheer size of the data, Data Volume of course, is a major challenge,
and is the one that is most easily recognized.
• However, there are others. Industry analyst firms like to point out that there are challenges not just in Volume, but also in Variety and Velocity.
• While these three are important, this short list fails to include additional
important requirements such as privacy and usability.
BIG DATA PIPELINE
The FIVE PHASES
• There are five distinct phases in handling Big Data:
• Acquisition/Recording
• Extraction, Cleaning/Annotation
• Integration/Aggregation/Representation
• Analysis/Modeling
• Interpretation
Data Acquisition and Recording(1/3)
• Big Data does not arise out of a vacuum: it is recorded from some data
generating source.
• Scientific experiments and simulations can easily produce petabytes of data
today.
• Much of this data is of no interest, and it can be filtered and compressed
by orders of magnitude.
• One challenge is to define these filters in such a way that they do not
discard useful information
Data Acquisition and Recording(2/3)
• The second big challenge is to automatically generate the right metadata
to describe what data is recorded and how it is recorded and measured.
• For example, in scientific experiments, considerable detail regarding
specific experimental conditions and procedures may be required to be
able to interpret the results correctly, and it is important that such metadata
be recorded with observational data. Metadata acquisition systems can
minimize the human burden in recording metadata.
Data Acquisition and Recording(3/3)
 Another important issue here is data provenance.
 Recording information about the data at its birth is not useful unless this
information can be interpreted and carried along through the data analysis
pipeline.
 For example, a processing error at one step can render subsequent analysis useless; with suitable provenance, we can easily identify all subsequent processing that depends on this step.
 Thus we need research both into generating suitable metadata and into data
systems that carry the provenance of metadata through data analysis
pipelines.
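A toy sketch of carrying provenance through an analysis pipeline: each derived value remembers the step that produced it and the tracked inputs it came from. The class and step names are invented for illustration.

```python
class Tracked:
    """A value that carries its provenance: the step that produced it
    and the tracked inputs it was derived from."""
    def __init__(self, value, step="source", parents=()):
        self.value, self.step, self.parents = value, step, parents

    def apply(self, step_name, fn):
        return Tracked(fn(self.value), step_name, (self,))

    def lineage(self):
        out = [self.step]
        for p in self.parents:
            out.extend(p.lineage())
        return out

raw = Tracked([1.0, 2.0, None, 4.0])
clean = raw.apply("drop_nulls", lambda xs: [x for x in xs if x is not None])
mean = clean.apply("mean", lambda xs: sum(xs) / len(xs))
print(mean.value, mean.lineage())  # 2.33..., ['mean', 'drop_nulls', 'source']
```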
Provenance: meaning
• The place of origin or earliest known history of something.
• The history of the ownership of an object, especially when documented or authenticated.
• Example sentence: "Items with a known provenance are excluded from further scrutiny."
Information Extraction and Cleaning (1/4)
 Frequently, the information collected will not be in a format ready
for analysis.
 For example, consider the collection of electronic health records in
a hospital, comprising:
transcribed dictations from several physicians, structured data from sensor measurements (possibly with some associated uncertainty),
Information Extraction and Cleaning (2/4)
• Image data such as X-rays.
• We cannot leave the data in this form and still effectively analyze it.
• We require an information extraction process that pulls out the required information from the underlying sources and expresses it in a structured form suitable for analysis.
Information Extraction and Cleaning (3/4)
• Doing this correctly and completely is a continuing technical challenge.
• Note that this data also includes images and will in the future include
video; such extraction is often highly application dependent (e.g., what
you want to pull out of an MRI is very different from what you would pull
out of a picture of the stars, or a surveillance photo).
• In addition, due to the ubiquity of surveillance cameras and popularity of
GPS-enabled mobile phones, cameras, and other portable devices, rich and
high fidelity location and trajectory (i.e., movement in space) data can
also be extracted.
Information Extraction and Cleaning (4/4)
• We are used to thinking of Big Data as always telling us the truth, but this
is actually far from reality.
• For example, patients may choose to hide risky behavior, and caregivers may sometimes misdiagnose a condition; patients may also inaccurately
recall the name of a drug or even that they ever took it, leading to missing
information in (the history portion of) their medical record.
• Existing work on data cleaning assumes well-recognized constraints on
valid data or well-understood error models; for many emerging Big Data
domains these do not exist.
Data Integration, Aggregation, and
Representation(1/4)
• Data analysis is considerably more challenging than simply locating,
identifying, understanding, and citing data.
• For effective large-scale analysis all of this has to happen in a completely
automated manner. This requires differences in data structure and
semantics to be expressed in forms that are computer understandable, and
then “robotically” resolvable.
• There is a strong body of work in data integration that can provide some of
the answers. However, considerable additional work is required to achieve
automated error-free difference resolution.
Data Integration, Aggregation, and Representation (2/4)
• Even for simpler analyses that depend on only one data set, there remains
an important question of suitable database design.
• Usually, there will be many alternative ways in which to store the same
information.
• Certain designs will have advantages over others for certain purposes,
and possibly drawbacks for other purposes.
Data Integration, Aggregation, and Representation (3/4)
• Witness, for instance, the tremendous variety in the structure of
bioinformatics databases with information regarding substantially similar
entities, such as genes.
• Database design is today an art, and is carefully executed in the enterprise
context by highly-paid professionals.
• We must enable other professionals, such as domain scientists, to create
effective database designs, either through devising tools to assist them in
the design process or through forgoing the design process completely and
developing techniques so that databases can be used effectively in the
absence of intelligent database design.
Query Processing, Data Modeling, and
Analysis(1/6)
• Methods for querying and mining Big Data are fundamentally different
from traditional statistical analysis on small samples.
• Big Data is often noisy, dynamic, heterogeneous, inter-related and
untrustworthy.
• Nevertheless, even noisy Big Data could be more valuable than tiny
samples because general statistics obtained from frequent patterns and
correlation analysis usually overpower individual fluctuations and often
disclose more reliable hidden patterns and knowledge.
Query Processing, Data Modeling, and
Analysis (2/6)
• Further, interconnected Big Data forms large heterogeneous information
networks, with which information redundancy can be explored to
compensate for :
– missing data,
– to crosscheck conflicting cases,
– to validate trustworthy relationships,
– to disclose inherent clusters, and
– to uncover hidden relationships and models
Query Processing, Data Modeling, and
Analysis (3/6)
• Mining requires integrated, cleaned, trustworthy, and efficiently accessible data; declarative query and mining interfaces; scalable mining algorithms; and big-data computing environments.
• At the same time, data mining itself can also be used to help improve the
quality and trustworthiness of the data, understand its semantics, and
provide intelligent querying functions.
Query Processing, Data Modeling, and
Analysis (4/6)
• Real-life medical records have errors, are heterogeneous, and are distributed across multiple systems.
• The value of Big Data analysis in health care, to take just one example
application domain, can only be realized if it can be applied robustly under
these difficult conditions. On the flip side, knowledge developed from data
can help in correcting errors and removing ambiguity.
Query Processing, Data Modeling, and
Analysis (5/6)
• For example, a physician may write “DVT” as the diagnosis for a patient.
This abbreviation is commonly used for both “deep vein thrombosis” and
“diverticulitis,” two very different medical conditions.
• A knowledge base constructed from related data can use associated symptoms or medications to determine which of the two the physician meant (a toy version is sketched below).
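A toy version of that disambiguation, scoring each candidate expansion by its overlap with the recorded symptoms. The symptom lists are invented for illustration, not clinical knowledge.

```python
knowledge_base = {  # invented symptom associations, for illustration only
    "deep vein thrombosis": {"leg swelling", "leg pain", "warm skin"},
    "diverticulitis": {"abdominal pain", "fever", "nausea"},
}
patient_symptoms = {"leg swelling", "leg pain"}

# Pick the expansion of "DVT" whose known symptoms best overlap the record
best = max(knowledge_base,
           key=lambda dx: len(knowledge_base[dx] & patient_symptoms))
print(best)  # deep vein thrombosis
```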
Query Processing, Data Modeling, and
Analysis (6/6)
• A problem with current Big Data analysis is the lack of coordination between database systems, which host the data and provide SQL querying, and analytics packages that perform various forms of non-SQL processing, such as data mining and statistical analyses.
• Today’s analysts are impeded by a tedious process of exporting data from
the database, performing a non-SQL process and bringing the data back.
This is an obstacle to carrying over the interactive elegance of the first
generation of SQL-driven OLAP systems into the data mining type of
analysis that is in increasing demand.
• A tight coupling between declarative query languages and the functions
of such packages will benefit both expressiveness and performance of the
analysis.
Interpretation (1/5)
• Having the ability to analyze Big Data is of limited value if users cannot
understand the analysis. Ultimately, a decision-maker, provided with the
result of analysis, has to interpret these results.
• This interpretation cannot happen in a vacuum.
• It involves examining all the assumptions made and retracing the
analysis.
Interpretation (2/5)
• Further, there are many possible sources of error: computer systems can
have bugs, models almost always have assumptions, and results can be
based on erroneous data.
• For all of these reasons, no responsible user will cede authority to the computer system.
• Analysts will try to understand, and verify, the results produced by the computer.
• The computer system must make it easy for her to do so. This is particularly a challenge with Big Data due to its complexity.
Interpretation (3/5)
• There are often crucial assumptions behind the data recorded.
• Analytical pipelines can often involve multiple steps, again with
assumptions built in.
• The recent mortgage-related shock to the financial system dramatically
underscored the need for such decision-maker diligence -- rather than
accept the stated solvency of a financial institution at face value, a decision-
maker has to examine critically the many assumptions at multiple stages
of analysis.
Interpretation (4/5)
• In short, it is rarely enough to provide just the results.
• One must provide supplementary information that explains how each result
was derived, and based upon precisely what inputs.
• Such supplementary information is called the provenance of the (result)
data.
Interpretation (5/5)
• Furthermore, with a few clicks the user should be able to drill down into each
piece of data that she sees and understand its provenance, which is a key
feature to understanding the data.
• That is, users need to be able to see not just the results, but also understand
why they are seeing those results.
• However, raw provenance, particularly regarding the phases in the analytics
pipeline, is likely to be too technical for many users to grasp completely.
• One alternative is to enable the users to “play” with the steps in the analysis –
make small changes to the pipeline, for example, or modify values for some
parameters.
Challenges in Big Data Analysis
Heterogeneity and Incompleteness (1/5)
• When humans consume information, a great deal of heterogeneity is
comfortably tolerated
• The nuance and richness of natural language can provide valuable depth.
• Machine analysis algorithms expect homogeneous data, and cannot
understand nuance.
• Hence data must be carefully structured as a first step in (or prior to) data
analysis.
• Example: A patient who has multiple medical procedures at a hospital. We
could create one record per medical procedure or laboratory test, one
record for the entire hospital stay, or one record for all lifetime hospital
interactions of this patient. With anything other than the first design, the
number of medical procedures and lab tests per record would be different
for each patient.
Challenges in Big Data Analysis
Heterogeneity and Incompleteness (2/5)
• The three design choices listed have successively less structure and,
conversely, successively greater variety.
• Greater structure is likely to be required by many (traditional) data
analysis systems.
• However, the less structured design is likely to be more effective
for many purposes – for example questions relating to disease
progression over time will require an expensive join operation with
the first two designs, but can be avoided with the latter.
• However, computer systems work most efficiently if they can store
multiple items that are all identical in size and structure. Efficient
representation, access, and analysis of semi-structured data require
further work
Challenges in Big Data Analysis
Heterogeneity and Incompleteness (3/5)
• Consider an electronic health record database design that has fields for birth
date, occupation, and blood type for each patient.
• What do we do if one or more of these pieces of information is not
provided by a patient?
• Obviously, the health record is still placed in the database, but with the corresponding attribute values set to NULL.
Challenges in Big Data Analysis
Heterogeneity and Incompleteness (4/5)
• A data analysis that looks to classify patients by, say, occupation, must take
into account patients for which this information is not known.
• Worse, these patients with unknown occupations can be ignored in the
analysis only if we have reason to believe that they are otherwise
statistically similar to the patients with known occupation for the analysis
performed.
• For example, if unemployed patients are more likely to hide their employment status, analysis results may be skewed: they assume a more employed population mix than actually exists, and hence potentially one with different occupation-related health profiles.
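A tiny numerical illustration of that skew (the proportions are invented): when the unemployed are likelier to leave the field blank, dropping NULL records overstates the employed share.

```python
# 100 synthetic patient records; "occupation" is not missing at random.
records = ([{"occupation": "employed"}] * 60
           + [{"occupation": None}] * 25          # mostly unemployed, hidden
           + [{"occupation": "unemployed"}] * 15)

known = [r for r in records if r["occupation"] is not None]
share = sum(r["occupation"] == "employed" for r in known) / len(known)
print(f"employed share among known records: {share:.0%}")  # 80%, vs. 60% overall
```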
Challenges in Big Data Analysis:
Heterogeneity and Incompleteness (5/5)
• Even after data cleaning and error correction, some incompleteness and
some errors in data are likely to remain.
• This incompleteness and these errors must be managed during data
analysis. Doing this correctly is a challenge.
• Recent work on managing probabilistic data suggests one way to make
progress.
Challenges in Big Data Analysis:
VOLUME (Scale) (1/9)
• The first thing anyone thinks of with Big Data is its size. After all, the word
“big” is there in the very name. Managing large and rapidly increasing
volumes of data has been a challenging issue for many decades.
• In the past, this challenge was mitigated by processors getting faster.
• But, there is a fundamental shift underway now: data volume is scaling
faster than compute resources, and CPU speeds are static.
Challenges in Big Data Analysis:
VOLUME (2/9)
• First, over the last five years processor technology has made a dramatic shift: rather than processors doubling their clock frequency every 18-24 months, clock speeds have largely stalled due to power constraints, and processors are being built with increasing numbers of cores.
• In the past, large data processing systems had to worry about parallelism
across nodes in a cluster; now, one has to deal with parallelism within a
single node.
Challenges in Big Data Analysis:
Volume (3/9)
• Unfortunately, parallel data processing techniques that were applied in the
past for processing data across nodes don’t directly apply for intra-node
parallelism.
• This is because the architecture looks very different; for example, there
are many more hardware resources such as processor caches and
processor memory channels that are shared across cores in a single node.
Challenges in Big Data Analysis:
VOLUME (4/9)
• Further, the move towards packing multiple sockets (each with 10s of
cores) adds another level of complexity for intra-node parallelism.
• Finally, with predictions of "dark silicon", namely that power considerations will in the future likely prohibit us from continuously using all of the hardware in the system, data processing systems will likely have to actively manage the power consumption of the processor.
• These unprecedented changes require us to rethink how we design, build
and operate data processing components.
Challenges in Big Data Analysis:
VOLUME (5/9)
• The second dramatic shift that is underway is the move towards cloud
computing.
• Cloud computing aggregates multiple disparate workloads with varying
performance goals into very large clusters.
• Example: interactive services demand that the data processing engine return an answer within a fixed response-time cap.
Challenges in Big Data Analysis:
Volume (6/9)
• This level of resource sharing is expensive.
• Large clusters require new ways of determining:
– how to run and execute data processing jobs so that we can meet the goals of each workload cost-effectively;
– how to deal with system failures, which occur more frequently as we operate on larger and larger clusters.
Challenges in Big Data Analysis
VOLUME (7/9)
• This places a premium on declarative approaches to expressing programs,
even those doing complex machine learning tasks, since global
optimization across multiple users’ programs is necessary for good
overall performance.
• Reliance on user-driven program optimizations is likely to lead to poor
cluster utilization, since users are unaware of other users’ programs.
Challenges in Big Data Analysis
VOLUME (8/9)
• A third dramatic shift that is underway is the transformative change of the
traditional I/O subsystem.
• For many decades, hard disk drives (HDDs) were used to store persistent
data.
• HDDs had far slower random IO performance than sequential IO
performance.
• Data processing engines formatted their data and designed their query
processing methods to “work around” this limitation.
Challenges in Big Data Analysis:
VOLUME (9/9)
• But, HDDs are increasingly being replaced by solid state drives today.
• Other technologies such as Phase Change Memory are around the corner.
• These newer storage technologies do not have the same large spread in
performance between the sequential and random I/O performance.
• This requires a rethinking of how we design storage subsystems for Big
data processing systems.
Challenges in Big Data Analysis: Timeliness
• The flip side of size is speed.
• The larger the data set to be processed, the longer it will take to analyze.
• The design of a system that effectively deals with size is likely also to
result in a system that can process a given size of data set faster.
• However, it is not just this speed that is usually meant when one speaks of
Velocity in the context of Big Data. Rather, there is an acquisition rate
challenge, and a timeliness challenge.
Challenges in Big Data Analysis: Privacy
• The privacy of data is another huge concern in the context of big data.
• There are many additional challenging research problems.
• For example, we do not know yet how to share private data while limiting disclosure and ensuring sufficient data utility in the shared data.
• The existing paradigm of differential privacy is a very important step in the right direction, but it unfortunately reduces information content too far to be useful in most practical cases.
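For concreteness, a minimal sketch of the Laplace mechanism that underlies differential privacy for counting queries; the epsilon value and the count are illustrative assumptions.

```python
import numpy as np

def private_count(true_count, epsilon=0.5):
    """Laplace mechanism: a counting query has sensitivity 1, so adding
    Laplace(0, 1/epsilon) noise gives epsilon-differential privacy.
    Smaller epsilon means stronger privacy but noisier answers, which
    is exactly the utility trade-off noted above."""
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Releasing "how many patients have condition X" privately:
print(round(private_count(1234, epsilon=0.5)))  # 1234 plus or minus a few
```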
Challenges in Big Data Analysis: Human Collaboration
• There remain many patterns that humans can easily detect but computer algorithms have a hard time finding.
• Ideally, analytics for Big Data will not be all computational; rather, it will be designed explicitly to have a human in the loop.
• The new sub-field of visual analytics is attempting to do this, at least with respect to the modeling and analysis phase in the pipeline.
• There is similar value to human input at all stages of the analysis pipeline.
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
 
Booking open Available Pune Call Girls Koregaon Park 6297143586 Call Hot Ind...
Booking open Available Pune Call Girls Koregaon Park  6297143586 Call Hot Ind...Booking open Available Pune Call Girls Koregaon Park  6297143586 Call Hot Ind...
Booking open Available Pune Call Girls Koregaon Park 6297143586 Call Hot Ind...
 
Coefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxCoefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptx
 
Call for Papers - International Journal of Intelligent Systems and Applicatio...
Call for Papers - International Journal of Intelligent Systems and Applicatio...Call for Papers - International Journal of Intelligent Systems and Applicatio...
Call for Papers - International Journal of Intelligent Systems and Applicatio...
 
Call Now ≽ 9953056974 ≼🔝 Call Girls In New Ashok Nagar ≼🔝 Delhi door step de...
Call Now ≽ 9953056974 ≼🔝 Call Girls In New Ashok Nagar  ≼🔝 Delhi door step de...Call Now ≽ 9953056974 ≼🔝 Call Girls In New Ashok Nagar  ≼🔝 Delhi door step de...
Call Now ≽ 9953056974 ≼🔝 Call Girls In New Ashok Nagar ≼🔝 Delhi door step de...
 
Thermal Engineering -unit - III & IV.ppt
Thermal Engineering -unit - III & IV.pptThermal Engineering -unit - III & IV.ppt
Thermal Engineering -unit - III & IV.ppt
 
result management system report for college project
result management system report for college projectresult management system report for college project
result management system report for college project
 
(INDIRA) Call Girl Bhosari Call Now 8617697112 Bhosari Escorts 24x7
(INDIRA) Call Girl Bhosari Call Now 8617697112 Bhosari Escorts 24x7(INDIRA) Call Girl Bhosari Call Now 8617697112 Bhosari Escorts 24x7
(INDIRA) Call Girl Bhosari Call Now 8617697112 Bhosari Escorts 24x7
 
Thermal Engineering Unit - I & II . ppt
Thermal Engineering  Unit - I & II . pptThermal Engineering  Unit - I & II . ppt
Thermal Engineering Unit - I & II . ppt
 
KubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghlyKubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghly
 
Java Programming :Event Handling(Types of Events)
Java Programming :Event Handling(Types of Events)Java Programming :Event Handling(Types of Events)
Java Programming :Event Handling(Types of Events)
 
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
 
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
 

DBMS

  • 9. 6. Distributed Data Mining and Mining Multi-agent Data • Need to correlate the data seen at the various probes (such as in a sensor network) • Adversarial data mining: an adversary may deliberately manipulate the data to sabotage the mining (e.g., make it produce false negatives) • Game theory may be needed to model the miner-adversary interaction, as sketched below [Figure: a two-player game tree between Player 1 (the miner) and Player 2 (the adversary), each choosing action H or T, with outcomes (-1,1) and (1,-1)]
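The slide's payoff figure suggests a matching-pennies-style game between the miner and the adversary. The following is a minimal illustrative sketch, not from the slides: it assumes the miner wins (+1) when the two actions match and loses (-1) otherwise, and numerically recovers the 50/50 mixed strategy that maximizes the miner's worst-case payoff.

```python
import numpy as np

# Payoff matrix for the miner (Player 1) in a matching-pennies-style game:
# rows = miner's action (H, T), columns = adversary's action (H, T).
# Assumed payoffs: the miner wins (+1) on a match, loses (-1) otherwise.
miner_payoff = np.array([[1, -1],
                         [-1, 1]])

def expected_payoff(p_miner_h: float, p_adv_h: float) -> float:
    """Expected payoff to the miner under mixed strategies."""
    p1 = np.array([p_miner_h, 1 - p_miner_h])
    p2 = np.array([p_adv_h, 1 - p_adv_h])
    return float(p1 @ miner_payoff @ p2)

# Scan mixed strategies on a grid; the adversary always plays the response
# that is worst for the miner, so the miner maximizes the worst case.
grid = np.linspace(0, 1, 101)
worst_case = [min(expected_payoff(p, q) for q in grid) for p in grid]
best = grid[int(np.argmax(worst_case))]
print(f"miner's maximin strategy: play H with prob {best:.2f}")  # -> 0.50
```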
  • 10. 7. Data Mining for Biological and Environmental Problems • New problems raise new questions • Large scale problems especially so – Biological data mining, such as HIV vaccine design – DNA, chemical properties, 3D structures, and functional properties → need to be fused – Environmental data mining – Mining for solving the energy crisis 10
  • 11. 8. Data-Mining-Process Related Problems • How to automate the mining process? – the composition of data mining operations – Data cleaning, with logging capabilities – Visualization and mining automation • Need a methodology: help users avoid many data mining mistakes – What is a canonical set of data mining operations? [Figure: a sample pipeline of operations: Sampling → Feature Selection → Mining …]
  • 12. 9. Security, Privacy and Data Integrity • How to ensure users' privacy while their data are being mined? • How to do data mining for the protection of security and privacy? • Knowledge integrity assessment – Data are intentionally modified from their original version, in order to misinform the recipients or for privacy and security – Development of measures to evaluate the knowledge integrity of a collection of • Data • Knowledge and patterns [Sidebar: http://www.cdt.org/privacy/ Headlines (Nov 21 2005): Senate Panel Approves Data Security Bill - The Senate Judiciary Committee on Thursday passed legislation designed to protect consumers against data security failures by, among other things, requiring companies to notify consumers when their personal information has been compromised. While several other committees in both the House and Senate have their own versions of data security legislation, S. 1789 breaks new ground by including provisions permitting consumers to access their personal files …]
  • 13. 10. Dealing with Non-static, Unbalanced and Cost-sensitive Data • The UCI datasets are small and not highly unbalanced • Real-world data are large (10^5 features), but less than 1% belongs to the useful (positive) classes • There is much information on costs and benefits, but no overall model of profit and loss • Data may evolve with a bias introduced by sampling • Each test incurs a cost • Data are extremely unbalanced • Data change with time [Figure: a cost-sensitive diagnosis example - temperature 39°C; pressure? blood test? cardiogram? essay? - each further test has a cost]
  • 14. New Challenges • Privacy-preserving data mining • Data mining over compartmentalized databases
  • 15. Inducing Classifiers over Privacy-Preserved Numeric Data [Figure: each record, e.g., Alice's age 30 and salary 25K, or John's age 50 and salary 40K, passes through a randomizer that adds noise, so age 30 becomes 65 (30+35); from the randomized records (65 | 50K | …, 35 | 60K | …) the miner reconstructs the age distribution and the salary distribution, which feed a decision-tree algorithm that outputs the model]
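A minimal sketch of the randomize-then-reconstruct idea pictured above, in the spirit of Agrawal and Srikant's randomization approach; the ages, the noise width, and the bin layout are all illustrative assumptions, and only the aggregate distribution - never an individual value - is reconstructed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Private ages, never disclosed directly.
ages = rng.normal(35, 8, size=5000).clip(18, 70)

# Each user discloses age + r, with r ~ Uniform(-25, 25); the noise law is public.
half_width = 25.0
disclosed = ages + rng.uniform(-half_width, half_width, size=ages.size)

# Iterative Bayes-style reconstruction of the age *distribution* over bins.
bins = np.arange(18, 71, 2.0)
centers = (bins[:-1] + bins[1:]) / 2.0
p = np.full(centers.size, 1.0 / centers.size)          # uniform prior over bins

def noise_pdf(delta):
    return (np.abs(delta) <= half_width) / (2.0 * half_width)

for _ in range(50):
    post = noise_pdf(disclosed[:, None] - centers[None, :]) * p + 1e-12
    post /= post.sum(axis=1, keepdims=True)             # posterior per disclosed value
    p = post.mean(axis=0)                               # updated distribution estimate

print(f"true mean age         : {ages.mean():.2f}")
print(f"reconstructed mean age: {(p * centers).sum():.2f}")
```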
  • 16. Other Recent Work • Cryptographic approach to privacy-preserving data mining – Lindell & Pinkas, Crypto 2000 • Privacy-preserving discovery of association rules – Vaidya & Clifton, KDD 2002 – Evfimievski et al., KDD 2002 – Rizvi & Haritsa, VLDB 2002
  • 17. Computation over Compartmentalized Databases [Figure: state and local sources - credit agencies, criminal records, demographics, birth and marriage registries, phone and email records - feed a "Frequent Traveler" rating model] • Randomized data shipping • Local computations followed by combination of partial models • On-demand secure data shipping and data composition
  • 18. Some Hard Problems • Past may be a poor predictor of future – Abrupt changes – Wrong training examples • Actionable patterns (principled use of domain knowledge?) • Over-fitting vs. not missing the rare nuggets • Richer patterns • Simultaneous mining over multiple data types • When to use which algorithm? • Automatic, data-dependent selection of algorithm parameters
  • 19. Discussion • Should data mining be viewed as "rich" querying and "deeply" integrated with database systems? – Most current work makes little use of database functionality • Should analytics be an integral concern of database systems? • Issues in data mining over heterogeneous data repositories (relationship to the heterogeneous-systems discussion)
  • 20. 20 Summary • Developing a Unifying Theory of Data Mining • Scaling Up for High Dimensional Data/High Speed Streams • Mining Sequence Data and Time Series Data • Mining Complex Knowledge from Complex Data • Data Mining in a Network Setting • Distributed Data Mining and Mining Multi-agent Data • Data Mining for Biological and Environmental Problems • Data-Mining-Process Related Problems • Security, Privacy and Data Integrity • Dealing with Non-static, Unbalanced and Cost-sensitive Data
  • 21. Summary • Data mining has shown promise but needs much further research. We stand on the brink of great new answers, but even more, of great new questions -- Matt Ridley
  • 23. What are we going to understand? • What is Big Data? • Why did we land up there? • To whom does it matter? • Where is the money? • Are we ready to handle it? • What are the concerns? • Tools and technologies – Is Big Data <=> Hadoop?
  • 24. Simple to start • What is the maximum file size you have dealt with so far? – Movies/files/streaming video that you have used? – What have you observed? • What is the maximum download speed you get? • Simple computation – How much time does it take just to transfer the data? (see the sketch below)
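As a worked example of that "simple computation", the sketch below estimates raw transfer time as size divided by link bandwidth; it ignores protocol overhead, contention, and retries, so real transfers are slower. The sizes and link speeds are illustrative assumptions.

```python
# Back-of-the-envelope: how long does it take just to move the data?
def transfer_hours(size_tb: float, link_mbps: float) -> float:
    bits = size_tb * 1e12 * 8           # terabytes -> bits
    seconds = bits / (link_mbps * 1e6)  # bits / (bits per second)
    return seconds / 3600

for size_tb, link in [(0.005, 100), (1, 100), (1000, 100), (1000, 10_000)]:
    print(f"{size_tb:>7} TB over {link:>6} Mbps: "
          f"{transfer_hours(size_tb, link):10.1f} hours")
# 1 TB over a 100 Mbps link is already ~22 hours of pure transfer time.
```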
  • 25. What is big data? • "Every day, we create 2.5 quintillion bytes of data - so much that 90% of the data in the world today has been created in the last two years alone. This data comes from everywhere: sensors used to gather climate information, posts to social media sites, digital pictures and videos, purchase transaction records, and cell phone GPS signals, to name a few. This data is big data."
  • 26. Huge amount of data • There are huge volumes of data in the world: – From the beginning of recorded time until 2003, we created 5 billion gigabytes (5 exabytes) of data. – In 2011, the same amount was created every two days. – In 2013, the same amount of data was created every 10 minutes.
  • 27. Big data spans three dimensions: Volume, Velocity and Variety • Volume: Enterprises are awash with ever-growing data of all types, easily amassing terabytes - even petabytes - of information. – Turn 12 terabytes of Tweets created each day into improved product sentiment analysis – Convert 350 billion annual meter readings to better predict power consumption • Velocity: Sometimes 2 minutes is too late. For time-sensitive processes such as catching fraud, big data must be used as it streams into your enterprise in order to maximize its value. – Scrutinize 5 million trade events created each day to identify potential fraud – Analyze 500 million daily call detail records in real time to predict customer churn faster – The latest I have heard is that 10 nanoseconds of delay is too much. • Variety: Big data is any type of data - structured and unstructured data such as text, sensor data, audio, video, click streams, log files and more. New insights are found when analyzing these data types together. – Monitor hundreds of live video feeds from surveillance cameras to target points of interest – Exploit the 80% data growth in images, video and documents to improve customer satisfaction
  • 28. Finally… 'Big Data' is similar to 'small data', but bigger. Having bigger data requires different approaches - techniques, tools, architecture - with an aim to solve new problems, or old problems in a better way.
  • 29. To whom does it matter? • Research community ☺ • Business community – new tools, new capabilities, new infrastructure, new business models, etc. • Across sectors – financial services…
  • 30. How are revenues looking…
  • 31. The Social Layer in an Instrumented, Interconnected World • 2+ billion people on the Web by end 2011 • 30 billion RFID tags today (1.3B in 2005) • 4.6 billion camera phones worldwide • 100s of millions of GPS-enabled devices sold annually • 76 million smart meters in 2009… 200M by 2014 • 12+ TBs of tweet data every day • 25+ TBs of log data every day • ? TBs of data every day
  • 32. What does Big Data trigger? • From “Big Data and the Web: Algorithms for Data Intensive Scalable Computing”, Ph.D Thesis, Gianmarco
  • 33. BIG DATA is not just HADOOP • Understand and navigate federated big data sources → Federated Discovery and Navigation • Manage & store huge volumes of any data → Hadoop File System / MapReduce • Structure and control data → Data Warehousing • Manage streaming data → Stream Computing • Analyze unstructured data → Text Analytics Engine • Integrate and govern all data sources → Integration, Data Quality, Security, Lifecycle Management, MDM
  • 34. Types of tools typically used in a Big Data scenario • Where is the processing hosted? – Distributed servers/cloud • Where is the data stored? – Distributed storage (e.g., Amazon S3) • What is the programming model? – Distributed processing (MapReduce; a minimal sketch follows) • How is the data stored and indexed? – High-performance, schema-free databases • What operations are performed on the data? – Analytic/semantic processing (e.g., RDF/OWL)
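To make the MapReduce programming model concrete, here is a minimal single-process sketch of the classic word count; a real Hadoop job would express the same map, shuffle, and reduce phases but run them across a cluster over HDFS blocks. The tiny document list is invented for illustration.

```python
from collections import defaultdict
from itertools import chain

# map: each "document" emits (word, 1) pairs; in Hadoop these would run
# on many nodes against blocks of a distributed file system.
def map_phase(doc: str):
    return [(word.lower(), 1) for word in doc.split()]

# shuffle: group all values by key (done by the framework between phases).
def shuffle_phase(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

# reduce: combine the values for each key.
def reduce_phase(groups):
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data is not just hadoop", "hadoop is one tool for big data"]
pairs = chain.from_iterable(map_phase(d) for d in docs)
print(reduce_phase(shuffle_phase(pairs)))   # {'big': 2, 'data': 2, ...}
```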
  • 35. When is dealing with Big Data hard? • When the operations on the data are complex: – e.g., simple counting is not a complex problem. – Modeling and reasoning with data of different kinds can get extremely complex. • Good news with big data: – Often, because of the vast amount of data, modeling techniques can get simpler (e.g., smart counting can replace complex model-based analytics)… – …as long as we deal with the scale.
  • 36. Why Big Data? • Key enablers for the appearance and growth of 'Big Data' are: – Increase in storage capabilities – Increase in processing power – Availability of data
  • 37. BIG DATA: CHALLENGES AND OPPORTUNITIES
  • 38. 38 WHAT IS BIG DATA? • Big Data is a popular term used to denote the exponential growth and availability of data, both structured and unstructured. • Very important! Why? More data leads to better analysis, enabling better decisions!
  • 39. 39 Definition of BIG DATA • As far back as 2001, industry analyst Doug Laney (currently with Gartner) articulated the now-mainstream definition of big data as three Vs: Volume, Velocity and Variety. Let us look at each one of these items.
  • 40. Definition of BIG DATA: Volume • Factors that contribute to the increase in volume of data: – Transaction-based data stored through the years – Unstructured data streaming in from social media – Ever-increasing amounts of sensor and machine-to-machine data being collected 40
  • 41. 41 Definition of BIG DATA: Volume.. • Previously, data storage was a big issue; today it is not. But other issues emerge, such as: • How to determine relevance within large data volumes? • How to use analytics to create value from relevant data?
  • 42. 42 Definition of BIG DATA: Velocity • Data is streaming in at unprecedented speed and must be dealt with in a timely manner (a minimal sketch follows). • RFID tags, sensors and smart metering are driving the need to deal with torrents of data in near-real time. • Reacting quickly enough to deal with data velocity is a challenge for most organizations.
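A minimal sketch of the kind of near-real-time processing velocity demands: a sliding-window counter that keeps only events from the last minute, so torrents of readings can be summarized as they arrive. The class name and the synthetic event stream are illustrative, not from the slides.

```python
import time
from collections import deque

class SlidingWindowCounter:
    """Count events (e.g., meter readings, trades) seen in the last `window` seconds."""
    def __init__(self, window_seconds: float):
        self.window = window_seconds
        self.timestamps = deque()

    def observe(self, ts: float) -> None:
        self.timestamps.append(ts)
        self._evict(ts)

    def count(self, now: float) -> int:
        self._evict(now)
        return len(self.timestamps)

    def _evict(self, now: float) -> None:
        # Drop everything that has fallen out of the window.
        while self.timestamps and self.timestamps[0] <= now - self.window:
            self.timestamps.popleft()

counter = SlidingWindowCounter(window_seconds=60.0)
now = time.time()
for offset in (-120, -90, -30, -10, 0):      # synthetic event stream
    counter.observe(now + offset)
print(counter.count(now))                     # -> 3 events inside the last minute
```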
  • 43. 43 Definition of BIG DATA: Variety • Data today comes in all types of formats. Structured, numeric data in traditional databases. • Information created from line-of-business applications; Unstructured text documents, email, video, audio, stock ticker data and financial transactions. • Managing, merging and governing different varieties of data is something many organizations still grapple with.
  • 44. 44 Definition of BIG DATA: Others • Variability: In addition to the increasing velocities and varieties of data, data flows can be highly inconsistent with periodic peaks. • Is something trending in social media? Daily, seasonal and event-triggered peak data loads can be challenging to manage. • Even more so when unstructured data is involved.
  • 45. 45 Definition of BIG DATA: Others • Complexity: Today's data comes from diverse sources. • It is still an undertaking to link, match, cleanse and transform data across systems. • It is necessary to connect and correlate relationships, hierarchies and multiple data linkages or our data can quickly spiral out of control. • PRIVACY: Associated Problems
  • 46. 46 BIG PROBLEMS • The problems start right away during data acquisition, when the data tsunami requires us to make decisions, currently in an ad hoc manner, about: • what data to keep, • what to discard, and • how to store what we keep reliably with the right metadata. • Much data today is not in a structured format: • for example, tweets and blogs are weakly structured pieces of text, while images and video are structured for storage and display, but not for semantic content and search; • transforming such content into a structured format for later analysis is a major challenge.
  • 47. 50 What has been Achieved… • During the last 35 years, data management principles such as: • physical and logical independence, • declarative querying and • cost-based optimization have enabled the first round of business intelligence applications and laid the foundation for managing and analyzing Big Data today
  • 48. 48 What is to be done? • The many novel challenges and opportunities associated with Big Data necessitate rethinking many aspects of these data management platforms, while retaining other desirable aspects. • Appropriate research & investment in Big Data will lead to a new wave of fundamental technological advances that will be embodied in the next generations of Big Data management and analysis platforms, products and systems.
  • 49. 49 Big Data: Opportunity • In a broad range of application areas, data is being collected at unprecedented scale. • Decisions that previously were based on guesswork, or on painstakingly constructed models of reality, can now be made based on the data itself. • Such Big Data analysis now drives nearly every aspect of our modern society, including mobile services, retail, manufacturing, financial services, life sciences, and physical sciences.
  • 50. 50 Big Data: Opportunity • Scientific research has been revolutionized by Big Data. • The Sloan Digital Sky Survey has today become a central resource for astronomers the world over. • The field of Astronomy is being transformed from one where taking pictures of the sky was a large part of an astronomer’s job to one where the pictures are all in a database already and the astronomer’s task is to find interesting objects and phenomena in the database.
  • 51. 51 Big Data: Opportunity • In the biological sciences, there is now a well-established tradition of depositing scientific data into a public repository, and also of creating public databases for use by other scientists. • There is an entire discipline of bioinformatics that is largely devoted to the curation and analysis of such data. • As technology advances, particularly with the advent of Next Generation Sequencing, the size and number of experimental data sets available is increasing exponentially.
  • 52. 55 Big Data: Opportunities • Big Data has the potential to revolutionize not just research, but also education. • A recent detailed quantitative comparison of different approaches taken by 35 charter schools in NYC found that one of the top five policies correlated with measurable academic effectiveness was the use of data to guide instruction. • Imagine a world in which we have access to a huge database where we collect every detailed measure of every student's academic performance.
  • 53. 53 Big Data: Opportunities • This data could be used to design the most effective approaches to education, starting from reading, writing, and math, to advanced, college-level courses. • We are far from having access to such data, but there are powerful trends in this direction. • In particular, there is a strong trend toward massive Web deployment of educational activities, and this will generate an increasingly large amount of detailed data about students' performance.
  • 54. 54 Big Data: Opportunities: Health Care • It is widely believed that the use of information technology: • can reduce the cost of healthcare • improve its quality by making care more preventive and personalized and basing it on more extensive (home-based) continuous monitoring
  • 55. 55 Big Data: Opportunities: Others • Effective use of Big Data for urban planning (through fusion of high-fidelity geographical data) • Intelligent transportation (through analysis and visualization of live and detailed road network data) • Environmental modeling (through sensor networks ubiquitously collecting data) • Energy saving (through unveiling patterns of use) • Smart materials (through the new materials genome initiative) • Computational social sciences
  • 56. 56 Big Data: Opportunities: Others • Financial systemic risk analysis (through integrated analysis of a web of contracts to find dependencies between financial entities) • Homeland security (through analysis of social networks and financial transactions of possible terrorists) • Computer security (through analysis of logged information and other events, known as Security Information and Event Management (SIEM))
  • 57. 60 Challenges • The sheer size of the data - Volume, of course - is a major challenge, and is the one that is most easily recognized. • However, there are others. Industry analysis companies like to point out that there are challenges not just in Volume, but also in Variety and Velocity. • While these three are important, this short list fails to include additional important requirements such as privacy and usability.
  • 58. BIG DATA PIPELINE [Figure: the Big Data analysis pipeline; its five phases are listed on the next slide]
  • 59. 59 The FIVE PHASES • There are five distinct phases in handling Big Data • Acquisition/Recording • Extraction, Cleaning/Annotation • Integration/Aggregation/Representation • Analysis/Modeling • Interpretation
  • 60. 60 Data Acquisition and Recording(1/3) • Big Data does not arise out of a vacuum: it is recorded from some data generating source. • Scientific experiments and simulations can easily produce petabytes of data today. • Much of this data is of no interest, and it can be filtered and compressed by orders of magnitude. • One challenge is to define these filters in such a way that they do not discard useful information
  • 61. 61 Data Acquisition and Recording(2/3) • The second big challenge is to automatically generate the right metadata to describe what data is recorded and how it is recorded and measured. • For example, in scientific experiments, considerable detail regarding specific experimental conditions and procedures may be required to be able to interpret the results correctly, and it is important that such metadata be recorded with observational data. Metadata acquisition systems can minimize the human burden in recording metadata.
  • 62. 62 Data Acquisition and Recording (3/3) • Another important issue here is data provenance. • Recording information about the data at its birth is not useful unless this information can be interpreted and carried along through the data analysis pipeline. • For example, a processing error at one step can render subsequent analysis useless; with suitable provenance, we can easily identify all subsequent processing that depended on this step (see the sketch below). • Thus we need research both into generating suitable metadata and into data systems that carry the provenance of data and metadata through data analysis pipelines.
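A minimal sketch of carrying provenance along a pipeline, as described above: each step appends its name to the record it produces, so if one step is later found faulty, everything derived from it can be identified. The step names and the tiny pipeline are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, List

@dataclass
class Tracked:
    """A value plus the provenance of every step that produced it."""
    value: Any
    provenance: List[str] = field(default_factory=list)

def step(name: str, fn: Callable[[Any], Any]):
    def run(item: Tracked) -> Tracked:
        return Tracked(fn(item.value), item.provenance + [name])
    return run

pipeline = [
    step("acquire:sensor-feed-v2", lambda v: v),
    step("clean:drop-negatives", lambda v: [x for x in v if x >= 0]),
    step("aggregate:mean", lambda v: sum(v) / len(v)),
]

item = Tracked([4.0, -1.0, 6.0])
for run in pipeline:
    item = run(item)

print(item.value)        # 5.0
print(item.provenance)   # if 'clean:drop-negatives' is later found buggy,
                         # every result that lists it can be invalidated
```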
  • 63. 63 Provenance: meaning • The place of origin or earliest known history of something. • The history of the ownership of an object, especially when documented or authenticated. • Example sentence: • Items with a known provenance were excluded from further scrutiny.
  • 64. 64 Information Extraction and Cleaning (1/4) • Frequently, the information collected will not be in a format ready for analysis. • For example, consider the collection of electronic health records in a hospital, comprising: transcribed dictations from several physicians, structured data from sensor measurements (possibly with some associated uncertainty)
  • 65. 65 Information Extraction and Cleaning (2/4) • and image data such as x-rays. • We cannot leave the data in this form and still effectively analyze it. • We require an information extraction process that pulls out the required information from the underlying sources and expresses it in a structured form suitable for analysis (a toy sketch follows).
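A toy sketch of such an extraction process: pattern matching pulls a handful of structured fields out of a made-up dictation snippet. Real clinical extraction is far harder and highly application-dependent, as the next slides note; every pattern and field name here is an illustrative assumption.

```python
import re

# A toy dictation snippet (entirely invented) and the fields we want from it.
note = "Pt is a 62 yo male. BP 138/85. Prescribed warfarin 5 mg daily."

patterns = {
    "age":     r"(\d{1,3})\s*yo",
    "sex":     r"yo\s+(male|female)",
    "bp_sys":  r"BP\s+(\d{2,3})/\d{2,3}",
    "bp_dia":  r"BP\s+\d{2,3}/(\d{2,3})",
    "drug":    r"Prescribed\s+([a-z]+)",
    "dose_mg": r"(\d+(?:\.\d+)?)\s*mg",
}

record = {}
for fieldname, pattern in patterns.items():
    m = re.search(pattern, note, flags=re.IGNORECASE)
    record[fieldname] = m.group(1) if m else None   # missing -> NULL-like value

print(record)
# {'age': '62', 'sex': 'male', 'bp_sys': '138', 'bp_dia': '85',
#  'drug': 'warfarin', 'dose_mg': '5'}
```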
  • 66. 66 Information Extraction and Cleaning (3/4) • Doing this correctly and completely is a continuing technical challenge. • Note that this data also includes images and will in the future include video; such extraction is often highly application dependent (e.g., what you want to pull out of an MRI is very different from what you would pull out of a picture of the stars, or a surveillance photo). • In addition, due to the ubiquity of surveillance cameras and popularity of GPS-enabled mobile phones, cameras, and other portable devices, rich and high fidelity location and trajectory (i.e., movement in space) data can also be extracted.
  • 67. 70 Information Extraction and Cleaning (4/4) • We are used to thinking of Big Data as always telling us the truth, but this is actually far from reality. • For example, patients may choose to hide risky behavior, and caregivers may sometimes misdiagnose a condition; patients may also inaccurately recall the name of a drug, or even that they ever took it, leading to missing information in (the history portion of) their medical records. • Existing work on data cleaning assumes well-recognized constraints on valid data or well-understood error models; for many emerging Big Data domains these do not exist.
  • 68. 68 Data Integration, Aggregation, and Representation(1/4) • Data analysis is considerably more challenging than simply locating, identifying, understanding, and citing data. • For effective large-scale analysis all of this has to happen in a completely automated manner. This requires differences in data structure and semantics to be expressed in forms that are computer understandable, and then “robotically” resolvable. • There is a strong body of work in data integration that can provide some of the answers. However, considerable additional work is required to achieve automated error-free difference resolution.
  • 69. 69 Data Integration,Aggregation, and Representation (2/4) • Even for simpler analyses that depend on only one data set, there remains an important question of suitable database design. • Usually, there will be many alternative ways in which to store the same information. • Certain designs will have advantages over others for certain purposes, and possibly drawbacks for other purposes.
  • 70. 70 Data Integration,Aggregation, and Representation(3/4) • Witness, for instance, the tremendous variety in the structure of bioinformatics databases with information regarding substantially similar entities, such as genes. • Database design is today an art, and is carefully executed in the enterprise context by highly-paid professionals. • We must enable other professionals, such as domain scientists, to create effective database designs, either through devising tools to assist them in the design process or through forgoing the design process completely and developing techniques so that databases can be used effectively in the absence of intelligent database design.
  • 72. 72 Query Processing, Data Modeling, and Analysis(1/6) • Methods for querying and mining Big Data are fundamentally different from traditional statistical analysis on small samples. • Big Data is often noisy, dynamic, heterogeneous, inter-related and untrustworthy. • Nevertheless, even noisy Big Data could be more valuable than tiny samples because general statistics obtained from frequent patterns and correlation analysis usually overpower individual fluctuations and often disclose more reliable hidden patterns and knowledge.
  • 73. 73 Query Processing, Data Modeling, and Analysis (2/6) • Further, interconnected Big Data forms large heterogeneous information networks, with which information redundancy can be explored: – to compensate for missing data, – to crosscheck conflicting cases, – to validate trustworthy relationships, – to disclose inherent clusters, and – to uncover hidden relationships and models
  • 74. 74 Query Processing, Data Modeling, and Analysis (3/6) • Mining requires Integrated, cleaned, trustworthy, and efficiently accessible data, declarative query and mining interfaces, scalable mining algorithms, and big-data computing environments. • At the same time, data mining itself can also be used to help improve the quality and trustworthiness of the data, understand its semantics, and provide intelligent querying functions.
  • 75. 75 Query Processing, Data Modeling, and Analysis (4/6) • Real-life medical records have errors, are heterogeneous, and are distributed across multiple systems. • The value of Big Data analysis in health care, to take just one example application domain, can only be realized if it can be applied robustly under these difficult conditions. On the flip side, knowledge developed from data can help in correcting errors and removing ambiguity.
  • 76. 76 Query Processing, Data Modeling, and Analysis (5/6) • For example, a physician may write "DVT" as the diagnosis for a patient. This abbreviation is commonly used for both "deep vein thrombosis" and "diverticulitis," two very different medical conditions. • A knowledge base constructed from related data can use associated symptoms or medications to determine which of the two the physician meant (a toy sketch follows).
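A toy sketch of that disambiguation idea: score each candidate expansion of "DVT" by its overlap with other terms in the patient's record. The miniature knowledge base is invented for illustration and is not clinically meaningful.

```python
# Hypothetical knowledge base: terms associated with each candidate expansion.
knowledge_base = {
    "deep vein thrombosis": {"leg swelling", "leg pain", "warfarin", "heparin"},
    "diverticulitis":       {"abdominal pain", "fever", "antibiotics"},
}

def disambiguate(candidates, context_terms):
    """Score each candidate expansion by overlap with terms seen in the record."""
    scores = {
        cand: len(knowledge_base[cand] & context_terms)
        for cand in candidates
    }
    return max(scores, key=scores.get), scores

context = {"leg swelling", "warfarin"}   # other entries in the patient's record
best, scores = disambiguate(["deep vein thrombosis", "diverticulitis"], context)
print(best, scores)
# deep vein thrombosis {'deep vein thrombosis': 2, 'diverticulitis': 0}
```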
  • 77. 80 Query Processing, Data Modeling, and Analysis (6/6) • A problem with current Big Data analysis is the lack of coordination between database systems, which host the data and provide SQL querying, with analytics packages that perform various forms of non-SQL processing, such as data mining and statistical analyses. • Today’s analysts are impeded by a tedious process of exporting data from the database, performing a non-SQL process and bringing the data back. This is an obstacle to carrying over the interactive elegance of the first generation of SQL-driven OLAP systems into the data mining type of analysis that is in increasing demand. • A tight coupling between declarative query languages and the functions of such packages will benefit both expressiveness and performance of the analysis.
  • 78. 78 Interpretation (1/5) • Having the ability to analyze Big Data is of limited value if users cannot understand the analysis. Ultimately, a decision-maker, provided with the result of analysis, has to interpret these results. • This interpretation cannot happen in a vacuum. • it involves examining all the assumptions made and retracing the analysis.
  • 79. 79 Interpretation (2/5) • Further, there are many possible sources of error: computer systems can have bugs, models almost always have assumptions, and results can be based on erroneous data. • For all of these reasons, no responsible user will cede authority to the computer system. • The analyst will try to understand, and verify, the results produced by the computer. • The computer system must make it easy for her to do so. This is particularly a challenge with Big Data due to its complexity. There are often crucial assumptions behind the data recorded.
  • 80. 80 Interpretation (3/5) • There are often crucial assumptions behind the data recorded. • Analytical pipelines can often involve multiple steps, again with assumptions built in. • The recent mortgage-related shock to the financial system dramatically underscored the need for such decision-maker diligence -- rather than accept the stated solvency of a financial institution at face value, a decision- maker has to examine critically the many assumptions at multiple stages of analysis.
  • 81. 81 Interpretation (4/5) • In short, it is rarely enough to provide just the results. • one must provide supplementary information that explains how each result was derived, and based upon precisely what inputs. • Such supplementary information is called the provenance of the (result) data.
  • 82. 82 Interpretation (5/5) • Furthermore, with a few clicks the user should be able to drill down into each piece of data that she sees and understand its provenance, which is a key feature to understanding the data. • That is, users need to be able to see not just the results, but also understand why they are seeing those results. • However, raw provenance, particularly regarding the phases in the analytics pipeline, is likely to be too technical for many users to grasp completely. • One alternative is to enable the users to “play” with the steps in the analysis – make small changes to the pipeline, for example, or modify values for some parameters.
  • 83. 83 Challenges in Big Data Analysis Heterogeneity and Incompleteness (1/5) • When humans consume information, a great deal of heterogeneity is comfortably tolerated • The nuance and richness of natural language can provide valuable depth. • Machine analysis algorithms expect homogeneous data, and cannot understand nuance. • Hence data must be carefully structured as a first step in (or prior to) data analysis. • Example: A patient who has multiple medical procedures at a hospital. We could create one record per medical procedure or laboratory test, one record for the entire hospital stay, or one record for all lifetime hospital interactions of this patient. With anything other than the first design, the number of medical procedures and lab tests per record would be different for each patient.
  • 84. 84 Challenges in Big Data Analysis: Heterogeneity and Incompleteness (2/5) • The three design choices listed have successively less structure and, conversely, successively greater variety. • Greater structure is likely to be required by many (traditional) data analysis systems. • However, the less structured design is likely to be more effective for many purposes - for example, questions relating to disease progression over time will require an expensive join operation with the first two designs, but can be avoided with the latter (see the sketch below). • However, computer systems work most efficiently if they can store multiple items that are all identical in size and structure. Efficient representation, access, and analysis of semi-structured data require further work.
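A small sketch of two of the design choices discussed above, using plain Python structures as stand-ins for database schemas: the per-procedure design is highly structured but needs joins for per-patient questions, while the nested per-patient design answers a progression-over-time question with a single scan. It also stores an unknown attribute as a NULL-like value, anticipating the next slides. All record contents are invented.

```python
# Design 1: one record per procedure -- most structured; questions that span
# a patient's history need joins/grouping across records.
per_procedure = [
    {"patient_id": 7, "stay_id": 1, "procedure": "x-ray",      "day": 1},
    {"patient_id": 7, "stay_id": 1, "procedure": "blood test", "day": 2},
    {"patient_id": 7, "stay_id": 2, "procedure": "MRI",        "day": 40},
]

# Design 3: one semi-structured record per patient -- less structure and more
# variety, but disease-progression questions become a single scan, no join.
per_patient = {
    "patient_id": 7,
    "occupation": None,          # unknown -> NULL; analysis must handle this
    "stays": [
        {"stay_id": 1, "procedures": ["x-ray", "blood test"]},
        {"stay_id": 2, "procedures": ["MRI"]},
    ],
}

# "Progression over time" from the nested design: just walk one record.
timeline = [p for stay in per_patient["stays"] for p in stay["procedures"]]
print(timeline)   # ['x-ray', 'blood test', 'MRI']
```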
  • 85. 85 Challenges in Big Data Analysis: Heterogeneity and Incompleteness (3/5) • Consider an electronic health record database design that has fields for birth date, occupation, and blood type for each patient. • What do we do if one or more of these pieces of information is not provided by a patient? • Obviously, the health record is still placed in the database, but with the corresponding attribute values set to NULL.
  • 86. 86 Challenges in Big Data Analysis: Heterogeneity and Incompleteness (4/5) • A data analysis that looks to classify patients by, say, occupation, must take into account patients for whom this information is not known. • Worse, these patients with unknown occupations can be ignored in the analysis only if we have reason to believe that they are otherwise statistically similar to the patients with known occupations. • For example, if unemployed patients are more likely to hide their employment status, analysis results may be skewed: they would consider a more-employed population mix than actually exists, and hence potentially one with different occupation-related health profiles.
  • 87. 90 Challenges in Big Data Analysis: Heterogeneity and Incompleteness (5/5) • Even after data cleaning and error correction, some incompleteness and some errors in data are likely to remain. • This incompleteness and these errors must be managed during data analysis. Doing this correctly is a challenge. • Recent work on managing probabilistic data suggests one way to make progress.
  • 88. 88 Challenges in Big Data Analysis: VOLUME (Scale)(1/9) • The first thing anyone thinks of with Big Data is its size. After all, the word “big” is there in the very name. Managing large and rapidly increasing volumes of data has been a challenging issue for many decades. • In the past, this challenge was mitigated by processors getting faster. • But, there is a fundamental shift underway now: data volume is scaling faster than compute resources, and CPU speeds are static.
  • 89. 89 Challenges in Big Data Analysis: VOLUME (2/9) • First, over the last five years processor technology has made a dramatic shift - rather than processors doubling their clock frequency every 18-24 months, now, due to power constraints, clock speeds have largely stalled and processors are being built with increasing numbers of cores. • In the past, large data processing systems had to worry about parallelism across nodes in a cluster; now, one has to deal with parallelism within a single node.
  • 90. 90 Challenges in Big Data Analysis: Volume (3/9) • Unfortunately, parallel data processing techniques that were applied in the past for processing data across nodes don't directly apply to intra-node parallelism. • This is because the architecture looks very different; for example, there are many more hardware resources, such as processor caches and processor memory channels, that are shared across cores in a single node.
  • 91. 91 Challenges in Big Data Analysis: VOLUME (4/9) • Further, the move towards packing multiple sockets (each with 10s of cores) adds another level of complexity to intra-node parallelism (a minimal sketch of single-node parallelism follows). • Finally, with predictions of "dark silicon", namely that power considerations will likely in the future prohibit us from using all of the hardware in the system continuously, data processing systems will likely have to actively manage the power consumption of the processor. • These unprecedented changes require us to rethink how we design, build and operate data processing components.
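A minimal sketch of intra-node parallelism using one worker process per core; unlike the inter-node case, these workers share caches and memory channels, so speedups are typically sublinear. The workload (a sum of squares) is just a placeholder for CPU-bound mining work.

```python
import os
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    # CPU-bound work executed on its own core.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=None):
    workers = workers or os.cpu_count()
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # Workers run in parallel but still contend for shared caches and
    # memory bandwidth within the node.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":   # required guard for process-based parallelism
    data = list(range(1_000_000))
    print(parallel_sum_of_squares(data))
```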
  • 92. 92 Challenges in Big Data Analysis: VOLUME (5/9) • The second dramatic shift that is underway is the move towards cloud computing. • Cloud computing aggregates multiple disparate workloads with varying performance goals into very large clusters. • Example: interactive services demand that the data processing engine return an answer within a fixed response-time cap.
  • 93. 93 Challenges in Big Data Analysis: Volume (6/9) • This level of resource sharing is expensive. • Large clusters require new ways of determining: – how to run and execute data processing jobs so that we can meet the goals of each workload cost-effectively; – how to deal with system failures, which occur more frequently as we operate on larger and larger clusters.
  • 94. 94 Challenges in Big DataAnalysis VOLUME(7/9) • This places a premium on declarative approaches to expressing programs, even those doing complex machine learning tasks, since global optimization across multiple users’ programs is necessary for good overall performance. • Reliance on user-driven program optimizations is likely to lead to poor cluster utilization, since users are unaware of other users’ programs.
  • 95. 95 Challenges in Big Data Analysis VOLUME(8/9) • A third dramatic shift that is underway is the transformative change of the traditional I/O subsystem. • For many decades, hard disk drives (HDDs) were used to store persistent data. • HDDs had far slower random IO performance than sequential IO performance. • Data processing engines formatted their data and designed their query processing methods to “work around” this limitation.
  • 96. 96 Challenges in Big Data Analysis: VOLUME (9/9) • But HDDs are increasingly being replaced by solid-state drives today. • Other technologies, such as phase-change memory, are around the corner. • These newer storage technologies do not have the same large spread between sequential and random I/O performance. • This requires a rethinking of how we design storage subsystems for Big Data processing systems (a back-of-the-envelope comparison follows).
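A back-of-the-envelope comparison showing why engines designed to "work around" HDD seeks deserve a rethink: the random-versus-sequential gap is a few hundredfold on a disk but only single digits on a solid-state drive. All device numbers are rough, assumed figures, not measurements.

```python
# Rough service-time model: time = seeks * seek_time + bytes / bandwidth.
def read_time_s(n_pages, page_bytes, seek_s, mb_per_s, sequential):
    seeks = 1 if sequential else n_pages          # one seek per random page
    return seeks * seek_s + (n_pages * page_bytes) / (mb_per_s * 1e6)

N, PAGE = 100_000, 4096   # 100k pages of 4 KB, about 0.4 GB in total

# (device, assumed seek/access latency in s, assumed bandwidth in MB/s)
for device, seek_s, bw in [("HDD", 8e-3, 150), ("SSD", 5e-5, 500)]:
    seq = read_time_s(N, PAGE, seek_s, bw, sequential=True)
    rnd = read_time_s(N, PAGE, seek_s, bw, sequential=False)
    print(f"{device}: sequential {seq:7.2f}s   random {rnd:9.2f}s   "
          f"ratio {rnd / seq:6.0f}x")
```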
  • 97. 100 Challenges in Big Data Analysis: Timeliness • The flip side of size is speed. • The larger the data set to be processed, the longer it will take to analyze. • The design of a system that effectively deals with size is likely also to result in a system that can process a given size of data set faster. • However, it is not just this speed that is usually meant when one speaks of Velocity in the context of Big Data. Rather, there is an acquisition rate challenge, and a timeliness challenge.
  • 98. 100 Challenges in Big Data Analysis: Privacy • The privacy of data is another huge concern in the context of big data. • There are many additional challenging research problems. • For example, we do not know yet how to share private data while limiting disclosure and ensuring sufficient data utility in the shared data. • The existing paradigm of differential privacy is a very important step in the right direction, but it unfortunately reduces information content too far to be useful in most practical cases (a minimal sketch follows).
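For concreteness, a minimal sketch of the Laplace mechanism, the standard building block of differential privacy, applied to a count query (sensitivity 1); the counts and epsilon values are illustrative. It also shows the utility tension the slide mentions: stronger privacy (smaller epsilon) means noisier answers.

```python
import numpy as np

rng = np.random.default_rng(7)

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy via the Laplace mechanism.

    A count query has sensitivity 1 (adding or removing one person changes the
    answer by at most 1), so Laplace noise with scale 1/epsilon suffices.
    """
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

true_count = 1342   # e.g., number of patients matching some condition
for eps in (0.1, 1.0, 10.0):
    noisy = [dp_count(true_count, eps) for _ in range(5)]
    print(f"epsilon={eps:>4}: {[round(v, 1) for v in noisy]}")
# Smaller epsilon -> stronger privacy -> noisier (less useful) answers,
# which is exactly the privacy/utility tension described above.
```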
  • 99. 100 Challenges in Big Data Analysis: Human Collaboration • There remain many patterns that humans can easily detect but computer algorithms have a hard time finding. • Ideally, analytics for Big Data will not be all computational - rather, it will be designed explicitly to have a human in the loop. • The new sub-field of visual analytics is attempting to do this, at least with respect to the modeling and analysis phase in the pipeline. • There is similar value to human input at all stages of the analysis pipeline.