Over the past year, YipitData spearheaded a full migration of its data pipelines to Apache Spark via the Databricks platform. Databricks now empowers its 40+ data analysts to independently create data ingestion systems, manage ETL workflows, and produce meaningful financial research for our clients.
6. YipitData Answers Key Investor Questions
▪ 70+ research products covering U.S. and international companies
▪ Email reports, Excel files, and data downloads
▪ Transaction data, web data, app data, targeted interviews, with more being added
▪ Clients include over 200 investment funds and Fortune 500 companies
▪ 53 data analysts and 3 data engineers
▪ 22 engineers total
▪ We are rapidly growing and hiring!
7. About Me
▪ Senior Software Engineer
▪ Manage platform for ETL workloads
▪ Based out of NYC
▪ linkedin.com/in/anupsegu
8. We Want Analysts To Own The Product
▪ Data Collection
▪ Data Exploration
▪ ETL Workflows
▪ Report Generation
13. Wide Range Of Table Sizes And Schemas
▪ 1 PB of compressed Parquet
▪ 60K tables
▪ 1.7K databases
14. Readypipe: From URLs To Parquet
▪ Repeatedly capture a snapshot of the website
▪ Websites frequently change
▪ Makes data available quickly for analysis
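The snapshot idea above can be sketched as a small helper. This is a hypothetical illustration, not Readypipe's actual code: `snapshot_record` and its field names are assumptions, showing how each capture could be wrapped as an append-only JSON line with a timestamp so repeated captures of a changing website never overwrite one another.

```python
import json
from datetime import datetime, timezone

def snapshot_record(url: str, scraped_fields: dict) -> str:
    """Wrap one website snapshot as a JSON line (hypothetical sketch).

    Every capture is appended rather than overwritten, so when the
    website changes, the change simply shows up in later records.
    """
    record = {
        "url": url,
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }
    record.update(scraped_fields)  # fields scraped from this capture
    return json.dumps(record)
```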
15. Streaming As JSON Is Great
▪ Append-only data in S3
▪ We don’t know the schema ahead of time
▪ Only flat column types
s3://{json_bucket}/{project_name}/{table}/...
[Diagram: Kinesis Firehose, JSON bucket, Parquet bucket, Glue Metastore]
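Since the stream supports only flat column types, nested scraped data has to be flattened before it is written. The helper below is a hypothetical sketch of that step; the underscore-joined key naming is an assumption, not YipitData's actual scheme.

```python
def flatten(record: dict, parent_key: str = "", sep: str = "_") -> dict:
    """Recursively flatten nested dicts into flat columns.

    {"a": {"b": 1}} becomes {"a_b": 1}, so every value in the output
    is a flat (non-nested) column suitable for the JSON stream.
    """
    flat = {}
    for key, value in record.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            flat.update(flatten(value, new_key, sep))
        else:
            flat[new_key] = value
    return flat
```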
16. Parquet Makes Data “Queryable”
▪ Create or update databases, tables, and schemas as needed
▪ Partitioned by the date of ingestion
▪ Spark cluster subscribed to write events
s3://{parquet_bucket}/{project_name}/{table}/dt={date}...
[Diagram: Kinesis Firehose, JSON bucket, Glue Metastore, Parquet bucket]
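Registering an ingestion-date partition in Glue could look roughly like the sketch below. It builds the `PartitionInput` dict that `boto3.client("glue").create_partition(...)` accepts, pointing at the `dt={date}` layout from the slide; the exact helper and its arguments are assumptions, though the Hive Parquet input/output format and SerDe class names are the standard ones.

```python
def partition_input(parquet_bucket: str, project_name: str,
                    table: str, date: str) -> dict:
    """Build a Glue PartitionInput for one ingestion-date partition
    (hypothetical sketch). Pass the result as PartitionInput= to
    boto3's glue.create_partition along with the database/table name.
    """
    location = f"s3://{parquet_bucket}/{project_name}/{table}/dt={date}"
    return {
        "Values": [date],  # one value per partition key (here: dt)
        "StorageDescriptor": {
            "Location": location,
            "InputFormat": "org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat",
            "OutputFormat": "org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat",
            "SerdeInfo": {
                "SerializationLibrary": "org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe"
            },
        },
    }
```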
17. Compaction = Greater Performance
▪ Insert files into new S3 locations
▪ Update partitions in Glue
▪ Pick appropriate column lengths for optimal file counts
s3://{parquet_bucket}/{project_name}/{table}/compacted/dt={date}...
[Diagram: Kinesis Firehose, JSON bucket, Glue Metastore, Parquet bucket]
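Picking an optimal file count for a compacted partition usually means aiming for some target bytes per file. The sketch below assumes a 128 MB target, which the deck does not state; the resulting number would drive something like `df.repartition(n).write.parquet(compacted_path)` in Spark.

```python
import math

# Assumed target size per output file; the deck does not specify one.
TARGET_FILE_BYTES = 128 * 1024 * 1024

def compacted_file_count(total_bytes: int) -> int:
    """Choose how many output files a compacted partition should have,
    aiming for roughly TARGET_FILE_BYTES per file (never fewer than 1).
    """
    return max(1, math.ceil(total_bytes / TARGET_FILE_BYTES))
```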
18. With 3rd Party Data, We Strive for Uniformity
▪ Various file formats
▪ Permissions challenges (“403 Access Denied”)
▪ Data lineage
▪ Data refreshes
19. Databricks Helps Manage 3rd Party Data
▪ Upload files and convert to Parquet with additional metadata
▪ Configure data access by assuming IAM roles within notebooks
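One way the "assume IAM roles within notebooks" idea could work is to take the temporary credentials returned by `boto3.client("sts").assume_role(...)` and hand them to Hadoop's S3A connector, which Spark uses for S3 access. The mapping helper below is a hypothetical sketch; the `fs.s3a.*` keys and the `TemporaryAWSCredentialsProvider` class are real Hadoop S3A settings, but Databricks also supports instance-profile-based access that needs no keys at all.

```python
def s3a_conf_from_credentials(creds: dict) -> dict:
    """Map STS AssumeRole credentials (the 'Credentials' dict from
    boto3's sts.assume_role response) onto Hadoop S3A settings that a
    Spark session could apply. Hypothetical sketch.
    """
    return {
        "fs.s3a.aws.credentials.provider":
            "org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider",
        "fs.s3a.access.key": creds["AccessKeyId"],
        "fs.s3a.secret.key": creds["SecretAccessKey"],
        "fs.s3a.session.token": creds["SessionToken"],
    }
```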
26. Wide Range Of Options For Spark Clusters
Hardware          Permissions   Spark Configuration
Driver instance   Metastore     Runtime
Worker instances  S3 access     Spark properties
EBS volumes       IAM roles     Environment variables
28. T-Shirt Sizes For Clusters
“SMALL”
▪ 3 r5.xlarge instances
▪ Warm instance pool for fast starts
“MEDIUM”
▪ 10 r5.xlarge instances
▪ Larger EBS volumes available if needed
“LARGE”
▪ 30 r5.xlarge instances
▪ Larger EBS volumes for heavy workloads
Standard IAM roles, Metastore, S3 access, and environment variables across all sizes
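The t-shirt sizes above could be encoded as a small lookup that produces a Databricks clusters API payload. This is a hypothetical sketch: the instance counts come straight from the slide, but treating them all as workers (rather than driver plus workers), and the cluster name, are assumptions.

```python
# Instance counts per the slide; whether they include the driver is
# not stated, so num_workers here is an assumption.
T_SHIRT_SIZES = {
    "SMALL": 3,
    "MEDIUM": 10,
    "LARGE": 30,
}

def cluster_spec(size: str) -> dict:
    """Build a minimal cluster spec for one t-shirt size
    (hypothetical sketch of a Databricks clusters API payload)."""
    return {
        "cluster_name": f"{size.lower()}-etl-cluster",  # hypothetical name
        "node_type_id": "r5.xlarge",
        "num_workers": T_SHIRT_SIZES[size],
    }
```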
31. Databricks Does The Heavy Lifting
▪ Provisions compute resources via a REST API
▪ Scales instances for cluster load
▪ Applies a wide range of Spark optimizations
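Provisioning compute via a REST API corresponds to the Databricks Clusters API (`POST /api/2.0/clusters/create`). The sketch below builds such a request with the standard library without sending it; the host and token are placeholders, and actually calling `urllib.request.urlopen(req)` would need real workspace credentials.

```python
import json
import urllib.request

def create_cluster_request(host: str, token: str, spec: dict) -> urllib.request.Request:
    """Build (but do not send) a request to the Databricks Clusters API.
    host and token are placeholders for a real workspace URL and
    personal access token.
    """
    return urllib.request.Request(
        url=f"https://{host}/api/2.0/clusters/create",
        data=json.dumps(spec).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```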
39. Automatically Create Workflows
▪ Pipelines are deployed without engineers
▪ Robust logging and error handling
▪ Easy to modify DAGs
▪ All happens within Databricks
[Diagram: example DAG of Task A → Task B → Task C]
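The "easy to modify DAGs" point can be illustrated with the standard library's `graphlib`: the pipeline is just a dict of task dependencies, so changing the pipeline means editing the dict. The Task A → Task B → Task C shape matches the slide; everything else is a hypothetical sketch, not YipitData's scheduler.

```python
from graphlib import TopologicalSorter

# Hypothetical DAG matching the slide: each key lists the tasks it
# depends on, so modifying the pipeline is just editing this dict.
dag = {
    "Task A": set(),
    "Task B": {"Task A"},
    "Task C": {"Task B"},
}

def run_order(task_deps: dict) -> list:
    """Return a valid execution order for the DAG; a scheduler would
    run each task (e.g. a notebook) once its dependencies finish."""
    return list(TopologicalSorter(task_deps).static_order())
```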
44. A Platform Invites New Solutions
▪ Establish standard queries and notebooks
▪ Trigger one DAG from another
▪ Trigger reporting processes after ETL jobs
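Triggering a reporting process after an ETL job maps naturally onto the Databricks Jobs API's run-now endpoint (`POST /api/2.1/jobs/run-now`). The payload builder below is a hypothetical sketch of that hand-off; `notebook_params` is one of the endpoint's real parameter fields for notebook tasks, while the job ID and parameters are placeholders.

```python
def run_now_payload(job_id: int, params=None) -> dict:
    """Build the JSON body for the Jobs API run-now endpoint, e.g. to
    kick off a reporting job once an ETL job finishes (sketch)."""
    payload = {"job_id": job_id}
    if params:
        payload["notebook_params"] = params
    return payload
```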