Data science remains a high-touch activity, especially in life, physical, and social sciences. Data management and manipulation tasks consume too much bandwidth: Specialized tools and technologies are difficult to use together, issues of scale persist despite the Cambrian explosion of big data systems, and public data sources (including the scientific literature itself) suffer curation and quality problems.
Together, these problems motivate a research agenda around “human-data interaction”: understanding and optimizing how people use and share quantitative information.
I’ll describe some of our ongoing work in this area at the University of Washington eScience Institute.
In the context of the Myria project, we're building a big data "polystore" system that can hide the idiosyncrasies of specialized systems behind a common interface without sacrificing performance. In scientific data curation, we are automatically correcting metadata errors in public data repositories with cooperative machine learning approaches. In the Viziometrics project, we are mining patterns of visual information in the scientific literature using machine vision, machine learning, and graph analytics. In the VizDeck and Voyager projects, we are developing automatic visualization recommendation techniques. In graph analytics, we are working on parallelizing best-of-breed graph clustering algorithms to handle multi-billion-edge graphs.
The common thread in these projects is the goal of democratizing data science techniques, especially in the sciences.
Data Science, Data Curation, and Human-Data Interaction
1. Data Science, Data Curation, and Human-Data Interaction
Bill Howe, Ph.D.
Associate Professor, Information School
Adjunct Associate Professor, Computer Science & Engineering
Associate Director and Senior Data Science Fellow, eScience Institute
7/26/2016 Bill Howe, UW 1
2. Data Scientists, Research Scientists, Research Faculty, Cyberinfrastructure
Dave Beck: Director of Research, Life Sciences (Ph.D., Medicinal Chemistry, Biomolecular Structure & Design)
Jake VanderPlas: Director of Research, Physical Sciences (Ph.D., Astronomy)
Valentina Staneva: Data Scientist (Ph.D., Applied Mathematics and Statistics)
Ariel Rokem: Data Scientist (Ph.D., Neuroscience)
Andrew Gartland: Research Scientist (Ph.D., Biostatistics)
Bryna Hazelton: Research Scientist (Ph.D., Physics)
Bernease Herman: Data Scientist (B.S., Statistics; formerly a software engineer at Amazon)
Vaughn Iverson: Research Scientist (Ph.D., Oceanography)
Rob Fatland: Director of Cloud and Data Solutions, Senior Data Science Fellow (Ph.D., Geophysics)
Joe Hellerstein: Senior Data Science Fellow (IBM Research, Microsoft Research, Google, ret.)
Brittany Fiore-Gartland: Ethnographer, Director of Ethnography (Ph.D., Communication)
http://escience.washington.edu
4. What is the rate-limiting step in data understanding?
[Chart: processing power (Moore’s Law) and the amount of data in the world grow over time, while human cognitive capacity stays flat.]
Idea adapted from “Less is More” by Bill Buxton (2001)
slide src: Cecilia Aragon, UW HCDE
5. How much time do you spend “handling data” as opposed to “doing science”?
Mode answer: “90%”
6.
7. “Human-Data Interaction”
Goal: Understand and optimize how people use and share quantitative information
8. The SQLShare Corpus: a multi-year log of hand-written SQL queries
Queries: 24,275
Views: 4,535
Tables: 3,891
Users: 591
SIGMOD 2016 (Shrainik Jain)
https://uwescience.github.io/sqlshare
9. Data “Grazing”: short dataset lifetimes
lifetime = days between first and last access of a table
SIGMOD 2016 (Shrainik Jain)
http://uwescience.github.io/sqlshare/
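The lifetime metric on this slide is straightforward to compute from a query log. A minimal sketch, assuming a toy access log of (table, access date) pairs; the table names and dates here are invented for illustration:

```python
from datetime import date

# Hypothetical access log: (table_name, access_date) pairs extracted
# from a query workload such as the SQLShare logs.
access_log = [
    ("proteins", date(2014, 3, 1)),
    ("proteins", date(2014, 3, 4)),
    ("stations", date(2014, 5, 10)),
]

# Track (first access, last access) per table.
spans = {}
for table, day in access_log:
    first, last = spans.get(table, (day, day))
    spans[table] = (min(first, day), max(last, day))

# Lifetime = days between a table's first and last recorded access.
lifetime_days = {t: (last - first).days for t, (first, last) in spans.items()}
print(lifetime_days)  # {'proteins': 3, 'stations': 0}
```

A table touched only once gets a lifetime of zero, which is exactly the “grazing” pattern the slide refers to: many datasets are loaded, queried briefly, and abandoned.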
21. Curation is fast becoming the bottleneck to data sharing
Microarray samples submitted to the Gene Expression Omnibus
Maxim Gretchkin, Hoifung Poon
22. Can we use the expression data directly to curate algorithmically?
color = labels supplied as metadata
clusters = first two PCA dimensions of the gene expression data itself
The expression data and the text labels appear to disagree.
Maxim Gretchkin, Hoifung Poon
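The disagreement check can be sketched in a few lines. This is a toy version with synthetic data, not the authors’ pipeline: project the expression matrix onto its first two PCA dimensions, then flag samples whose metadata label disagrees with the majority label of their expression-space cluster. The tissue names and the mislabeled samples are invented for illustration.

```python
import numpy as np
from collections import Counter, defaultdict

rng = np.random.default_rng(0)

# Hypothetical expression matrix: 6 samples x 50 genes from two tissues,
# with two samples deliberately mislabeled in the metadata.
brain = rng.normal(0.0, 1.0, size=(3, 50))
liver = rng.normal(5.0, 1.0, size=(3, 50))
X = np.vstack([brain, liver])
metadata_labels = ["brain", "brain", "liver", "liver", "liver", "brain"]

# First two PCA dimensions via SVD of the centered matrix.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
coords = Xc @ Vt[:2].T

# Split samples on the first component, then flag any sample whose metadata
# label disagrees with the majority label of its expression-space group.
side = coords[:, 0] > coords[:, 0].mean()
groups = defaultdict(list)
for i, s in enumerate(side):
    groups[bool(s)].append(i)

flagged = []
for members in groups.values():
    majority = Counter(metadata_labels[i] for i in members).most_common(1)[0][0]
    flagged += [i for i in members if metadata_labels[i] != majority]
```

Here `flagged` contains exactly the two mislabeled samples: their expression profiles cluster with one tissue while their metadata claims the other.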
24. Deep Curation
Distant supervision and co-learning between a text-based classifier and an expression-based classifier: both models improve by training on each other’s results.
Free-text classifier
Expression classifier
Maxim Gretchkin, Hoifung Poon
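The co-learning loop can be illustrated with a toy two-view setup. A minimal sketch, assuming synthetic “text” and “expression” feature views of the same samples and a simple nearest-centroid classifier in place of the learned models the project actually uses; all names and thresholds are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic two-view data standing in for the two modalities: "text" features
# and "expression" features, both noisy functions of a hidden tissue label.
n = 200
y = np.tile([0, 1], n // 2)
view_text = y[:, None] + rng.normal(0, 0.6, size=(n, 5))
view_expr = y[:, None] + rng.normal(0, 0.6, size=(n, 5))

def predict(view, labeled, labels):
    """Nearest-centroid classifier; returns predictions and a confidence margin."""
    c0 = view[labeled & (labels == 0)].mean(axis=0)
    c1 = view[labeled & (labels == 1)].mean(axis=0)
    d0 = np.linalg.norm(view - c0, axis=1)
    d1 = np.linalg.norm(view - c1, axis=1)
    return (d1 < d0).astype(int), np.abs(d0 - d1)

labeled = np.zeros(n, dtype=bool)
labeled[:10] = True          # seed set: only 10 curated labels
working = y.copy()           # pseudo-labels; only read where labeled is True

for _ in range(5):
    for view in (view_text, view_expr):
        pred, margin = predict(view, labeled, working)
        # Co-learning step: each view promotes its most confident unlabeled
        # samples, with its own predictions, into the shared training pool.
        confident = ~labeled & (margin > np.median(margin))
        working[confident] = pred[confident]
        labeled |= confident

pred_text, _ = predict(view_text, labeled, working)
accuracy = (pred_text == y).mean()
```

Starting from ten labels, each view bootstraps the other to near-complete coverage, which is the qualitative behavior the slide describes.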
28. Observations
• Figures in the literature are the currency of scientific ideas
• This visual information is almost entirely unexplored
• Our thought: mine patterns in the visual literature
30. Step 2: Classification
• Divide images into small patches
• Take a random sample of patches
• Run k-means on the sample (k = 200)
• For each figure in the training set, generate a length-200 feature vector by similarity to the clusters; train a model
• For each test image, create the same vector and classify it with the model
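The steps above are a bag-of-visual-words pipeline, and can be sketched end to end. A toy version in NumPy, with a small codebook (k = 8 instead of the slide’s 200), tiny synthetic “figures” instead of real paper images, and a nearest-centroid classifier standing in for the trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    """Plain k-means; returns the cluster centers (the visual codebook)."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        assign = d.argmin(axis=1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = X[assign == j].mean(axis=0)
    return centers

def patches(img, size=4):
    """Cut an image into non-overlapping size x size patches, flattened."""
    h, w = img.shape
    return np.array([img[i:i + size, j:j + size].ravel()
                     for i in range(0, h - size + 1, size)
                     for j in range(0, w - size + 1, size)])

# Toy "figures": 16x16 images; class 0 is smooth, class 1 is high-frequency.
def make_figure(label):
    if label == 0:
        return rng.normal(0, 0.1, (16, 16))
    return rng.normal(0, 0.1, (16, 16)) + np.indices((16, 16)).sum(0) % 2

figures = [make_figure(l) for l in (0, 1) * 10]
labels = np.array([0, 1] * 10)

# Steps 1-3: sample patches from all figures, cluster into a codebook.
all_patches = np.vstack([patches(f) for f in figures])
sample = all_patches[rng.choice(len(all_patches), 200, replace=False)]
codebook = kmeans(sample, k=8)

# Step 4: each figure becomes a length-k histogram of nearest codewords.
def featurize(img):
    p = patches(img)
    d = ((p[:, None, :] - codebook[None]) ** 2).sum(-1)
    return np.bincount(d.argmin(axis=1), minlength=len(codebook)) / len(p)

X = np.array([featurize(f) for f in figures])

# Step 5: train any classifier on the vectors; nearest-centroid for brevity.
c0, c1 = X[labels == 0].mean(axis=0), X[labels == 1].mean(axis=0)
pred = (np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)).astype(int)
```

Smooth and high-frequency figures put their histogram mass on different codewords, so even this crude classifier separates the two classes; the real pipeline does the same with paper figures and figure-type labels.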
31.
32. Do high-impact papers have fewer equations, as indicated by Fawcett and Higginson? (Yes)
[Chart: equation density in high-impact vs. low-impact papers]
Poshen Lee, Jevin West
53. • OCCs: Big Data / database researcher with broad impact and expertise in research data management
• Democratizing Data Science
– Ourselves: reduce overhead in attention-scarce regimes
– Other fields: reduce overhead of interdisciplinary research
– The public: reduce overhead of communicating with the public and policymakers
• SQLShare
– Why? What? Impact?
– Key: RDM, NSF-funded, hundreds of users
– Are these workloads any different from a typical database workload?
• HaLoop
– Why? What? Impact?
– Key: papers, new subfield in big data
• Myria
– Why? What? Impact?
– Key: funding
• Viziometrics
– Why? What? Impact?
• Data Curation through an Algorithmic Lens
– Why? What? Impact?
– Volume, variety, velocity. Volume: tasks that scale with the number of records (movement, validation). Variety: tasks that scale with the number of datasets (metadata attachment, cataloging, metadata verification). Velocity: tasks that scale with the time since release (data journalism, legal cases).
– Example: Maxim’s work on the prevalence of missing and incorrect labels
– Is this dataset what it says it is?
– Why? The reproducibility crisis
– Is this fully automatic? No: training data, computational steering
57. • Available
– Can you get it if you know where to look?
• Discoverable
– Can you get it if you don’t know where to look?
• Manipulable
– What can you do with it besides download it? Can the structure be readily parsed and transformed?
• Interpretable
– Is the information internally consistent with respect to provenance, metadata, column names, etc.?
• Contextualizable
– Is the information externally consistent with respect to other related datasets? Can it be connected to other datasets through standards or conventions? Does it admit connections to other datasets?
64. • Allen Institute example: the flexibility gap between high-level and low-level tools
– Domain-specific languages
http://casestudies.brain-map.org/ggb#section_explorea
http://blog.ibmjstart.net/2015/08/22/dynamic-dashboards-from-jupyter-notebooks/
66. What is the rate-limiting step in data understanding?
[Chart: processing power (Moore’s Law) and the amount of data in the world grow over time, while human cognitive capacity stays flat.]
Idea adapted from “Less is More” by Bill Buxton (2001)
slide src: Cecilia Aragon, UW HCDE
67. A Typical Data Science Workflow
1) Preparing to run a model: gathering, cleaning, integrating, restructuring, transforming, loading, filtering, deleting, combining, merging, verifying, extracting, shaping, massaging (“80% of the work” -- Aaron Kimball)
2) Running the model
3) Interpreting the results (“the other 80% of the work”)
68. How much time do you spend “handling data” as opposed to “doing science”?
Mode answer: “90%”