Hybrid collaborative tiered storage with Alluxio | Thai Bui
This document discusses using Alluxio and ZFS together to provide a hybrid collaborative tiered storage solution with Amazon S3. Alluxio acts as a distributed data storage layer that can mount S3 and HDFS, providing data locality. ZFS works at the kernel level to accelerate read/write speeds by caching data in RAM and automatically promoting and demoting blocks between storage tiers such as RAM, SSD, and S3. Benchmark results show that the combination of ZFS and NVMe SSDs provides up to 10x faster reads and 4x faster writes compared to Amazon EBS alone, and up to 15x faster performance than accessing data directly from S3. This hybrid approach improves performance for analytic queries in the cloud.
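The promotion/demotion behavior described above can be sketched as a toy tiered cache in Python. This is an illustrative model only, not ZFS's actual ARC/L2ARC implementation; the tier names and capacities are made up:

```python
from collections import OrderedDict

class TieredCache:
    """Toy model of tiered block storage: hot blocks live in RAM,
    RAM overflow demotes to SSD, and SSD overflow falls back to the
    backing store (S3), where every block always remains available."""

    def __init__(self, ram_blocks, ssd_blocks):
        self.tiers = {"RAM": OrderedDict(), "SSD": OrderedDict()}
        self.capacity = {"RAM": ram_blocks, "SSD": ssd_blocks}

    def read(self, block_id):
        # Hit in RAM: refresh recency and report where the block was found.
        if block_id in self.tiers["RAM"]:
            self.tiers["RAM"].move_to_end(block_id)
            return "RAM"
        # Hit in SSD: promote the block back into RAM.
        if block_id in self.tiers["SSD"]:
            del self.tiers["SSD"][block_id]
            self._insert("RAM", block_id)
            return "SSD"
        # Miss: fetch from the backing store (S3) and cache in RAM.
        self._insert("RAM", block_id)
        return "S3"

    def _insert(self, tier, block_id):
        self.tiers[tier][block_id] = True
        # Evict least-recently-used blocks, demoting RAM -> SSD -> dropped.
        while len(self.tiers[tier]) > self.capacity[tier]:
            victim, _ = self.tiers[tier].popitem(last=False)
            if tier == "RAM":
                self._insert("SSD", victim)

cache = TieredCache(ram_blocks=2, ssd_blocks=2)
print(cache.read("a"))  # S3: cold read
print(cache.read("a"))  # RAM: now cached
```

Subsequent reads of a block evicted from RAM would be served from SSD and promoted back, which is the collaboration between tiers the talk describes.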
Improving Presto performance with Alluxio at TikTok | Alluxio, Inc.
This document discusses improving the performance of Presto queries on Hive data stored in HDFS by leveraging Alluxio caching. It describes how TikTok integrated Presto with Alluxio to cache the most frequently accessed data partitions, reducing the median query latency by 41.2% and average latency by over 20% for cache hits. Custom caching strategies were developed to identify and prioritize caching the partitions consuming the most IO to maximize resource utilization and minimize cache space requirements.
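The partition-selection idea above (cache the partitions that account for the most IO, within a fixed cache budget) can be sketched as a simple greedy heuristic. The partition names, sizes, and IO figures below are made up for illustration; TikTok's actual strategy is not public beyond the talk:

```python
def pick_partitions(stats, budget_gb):
    """Greedily choose partitions to cache: highest IO consumed first,
    skipping any partition that no longer fits in the remaining budget.
    `stats` maps partition -> (io_gb_scanned, size_gb)."""
    chosen, used = [], 0.0
    for part, (io_gb, size_gb) in sorted(
            stats.items(), key=lambda kv: kv[1][0], reverse=True):
        if used + size_gb <= budget_gb:
            chosen.append(part)
            used += size_gb
    return chosen

# Hypothetical per-partition stats: (IO scanned in GB, partition size in GB).
stats = {
    "dt=2020-01-01": (500, 40),
    "dt=2020-01-02": (900, 60),
    "dt=2020-01-03": (100, 10),
}
print(pick_partitions(stats, budget_gb=70))
# → ['dt=2020-01-02', 'dt=2020-01-03']
```

The point of ranking by IO rather than by size is exactly the trade-off in the summary: maximize cache-hit IO while minimizing cache space.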
RubiX: A caching framework for big data engines in the cloud. RubiX transparently provides data caching to engines such as Presto, Spark, and Hadoop without user intervention.
Alluxio Data Orchestration Platform for the Cloud | Shubham Tagra
Alluxio originated as an open source project at UC Berkeley to orchestrate data for cloud applications by providing a unified namespace and intelligent data caching across multiple data sources. It provides consistent high performance for analytics and AI workloads running on object stores by caching frequently accessed data in memory and tiering data to flash/disk based on policies. Alluxio can also enable hybrid cloud environments by allowing on-premises workloads to burst to public clouds without data movement through "zero-copy" access to remote data.
Powering Interactive Analytics with Alluxio and Presto | Alluxio, Inc.
Data Orchestration Summit 2020 organized by Alluxio
https://www.alluxio.io/data-orchestration-summit-2020/
Powering Interactive Analytics with Alluxio and Presto
Dmytro Dermanskyi, Data Engineering Lead, WalkMe
About Alluxio: alluxio.io
Engage with the open source community on slack: alluxio.io/slack
Using Alluxio as a Fault-tolerant Pluggable Optimization Component of JD.com'... | Alluxio, Inc.
JD.com, China's largest retailer, uses Alluxio as a fault-tolerant optimization component in its computation frameworks. Alluxio improves JDPresto performance by 10x on 100+ nodes by caching data and reducing remote reads. Ongoing exploration includes running Alluxio on YARN for resource management, using Alluxio as a shuffle service to address disk I/O bottlenecks, and separating compute and storage across clusters for further optimization. JD has also contributed various features and fixes to Alluxio, including a new WebUI, eviction strategies, JVM monitoring, shell commands, and tests.
Optimizing Latency-Sensitive Queries for Presto at Facebook: A Collaboration ... | Alluxio, Inc.
Alluxio Global Online Meetup
May 7, 2020
For more Alluxio events: https://www.alluxio.io/events/
Speakers:
Rohit Jain, Facebook
Yutian "James" Sun, Facebook
Bin Fan, Alluxio
For many latency-sensitive SQL workloads, Presto is often bound by retrieving distant data. In this talk, Rohit Jain and James Sun from Facebook and Bin Fan from Alluxio will introduce their teams’ collaboration on adding a local on-SSD Alluxio cache inside Presto workers to reduce Presto query latency.
This talk will focus on:
- Insights of the Presto workloads at Facebook w.r.t. cache effectiveness
- API and internals of the Alluxio local cache, from design trade-offs (e.g., caching granularity and concurrency level) to performance optimizations.
- Initial performance analysis and timeline to deliver this feature for general Presto users.
- Discussion on our future work to optimize cache performance with deeper integration with Presto
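The "caching granularity" trade-off mentioned above can be illustrated with a toy page-granularity cache: files are cached in fixed-size pages, so a read only fetches the pages it misses rather than whole files. This is a simplified sketch, not the actual Alluxio local cache code, and the page size is illustrative:

```python
class PageCache:
    """Toy page-granularity cache: each (file, page_index) is fetched
    from remote storage at most once and served locally afterwards."""
    PAGE_SIZE = 4  # bytes; illustrative only

    def __init__(self, fetch_page):
        self.pages = {}               # (file_id, page_index) -> bytes
        self.fetch_page = fetch_page  # callback to remote storage

    def read(self, file_id, offset, length):
        out, end = bytearray(), offset + length
        for idx in range(offset // self.PAGE_SIZE, (end - 1) // self.PAGE_SIZE + 1):
            key = (file_id, idx)
            if key not in self.pages:          # page miss: go remote
                self.pages[key] = self.fetch_page(file_id, idx)
            base = idx * self.PAGE_SIZE
            out += self.pages[key][max(offset, base) - base:
                                   min(end, base + self.PAGE_SIZE) - base]
        return bytes(out)

data = b"abcdefghijklmnop"
fetched = []
def fetch(file_id, idx):
    fetched.append(idx)
    return data[idx * 4:(idx + 1) * 4]

cache = PageCache(fetch)
cache.read("f", 2, 6)   # touches pages 0 and 1: both fetched remotely
cache.read("f", 0, 4)   # page 0 already cached: no new fetch
print(fetched)          # → [0, 1]
```

A smaller page size wastes less cache space on unread bytes but raises bookkeeping and concurrency costs, which is the design tension the talk examines.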
Hybrid data lake on Google Cloud with Alluxio and Dataproc | Alluxio, Inc.
Data Orchestration Summit 2020 organized by Alluxio
https://www.alluxio.io/data-orchestration-summit-2020/
Hybrid Data Lake on Google Cloud with Alluxio and Dataproc
Roderick Yao, Strategic Cloud Engineer (Google Cloud)
About Alluxio: alluxio.io
Engage with the open source community on slack: alluxio.io/slack
How to Develop and Operate Cloud First Data Platforms | Alluxio, Inc.
Alluxio Online Meetup
Feb 11, 2020
Speakers:
Du Li, Electronic Arts
Bin Fan, Alluxio
In cloud-based software stacks, there are varying degrees of automation across different layers: infrastructure, platform, and application. A mismatch in automation often breaks the balance in DevOps, causing operational nightmares in platforms and applications. This talk overviews two projects at Electronic Arts (EA) that address the mismatch through data orchestration: one project automatically generates configurations for all components of a large monitoring system, reducing the daily average number of alerts from ~1000 to ~20; the other introduces Alluxio to cache data and unify the address space across ETL and analytics workloads, which substantially simplifies the architecture, improves performance, and reduces ops overhead.
What’s new in Alluxio 2: from seamless operations to structured data management | Alluxio, Inc.
Alluxio Online Community Office Hours
Jan 28, 2020
Speakers:
Bin Fan, Alluxio
Calvin Jia, Alluxio
The Alluxio 2.0 release was the biggest update since the birth of the project as “Tachyon” in UC Berkeley’s AMPLab. Gathering feedback from the open source community and enterprise users, Alluxio 2.0 expands the system in three major directions: improving operability, adding more advanced data management, and re-architecting the system to scale to 1 billion+ files. The system is now cloud native on AWS and Google Cloud, and supports native deployment on Kubernetes. The new data management features enable data migration and replication across different storage systems.
In this office hour, we introduce what’s new in the Alluxio 2 release, and dive deeper in each major direction the system has expanded on.
In this Office Hour, we will go over:
- Introduction and motivation of focus areas of Alluxio 2
- Overview of cloud native deployment methods
- New data management features
- System scalability improvements
Apache Tajo on Swift: Bringing SQL to the OpenStack World | Jihoon Son
This slide was presented at the SK Telecom T Developer Forum. It contains the brief evaluation results of the query execution performance of Tajo on Swift.
I conducted two kinds of experiments. The first compared the query performance of Tajo on Swift with its performance on another distributed storage system, HDFS; the second tested the scalability of Swift.
Interestingly, scan performance on Swift is more than two times slower than on HDFS. In addition, the task scheduling time on Swift is much greater than on HDFS, which means the query initialization cost is very high.
Alluxio+Presto: An Architecture for Fast SQL in the Cloud | Alluxio, Inc.
Alluxio is a virtual distributed file system that serves as a data access layer between applications and storage systems. It provides a unified interface, improved performance through caching, and enables transparent migration between storage systems. Alluxio deployed with Presto on cloud storage like S3 can provide 5x faster query performance through caching query data in Alluxio workers located with compute. Case studies show how Alluxio improved response times for analytics workloads at large companies by eliminating remote data access and enabling data locality.
Exploring Alluxio for Daily Tasks at Robinhood | Alluxio, Inc.
This document discusses Robinhood's use of Alluxio to improve the performance of their data analytics workflows. It describes Robinhood's data lake architecture and daily traffic patterns, including ad-hoc visualization queries, data analysis jobs, and report generation. The document notes limitations with their previous approach of reading directly from S3, including slow and unstable reads. It then outlines how Alluxio helps by caching frequently used data to improve read speeds by 30-50% and reduce total data scanned. Technical challenges of reading cold data and handling large schemas and tables are also mentioned. Overall, Alluxio provided a 30% performance improvement for their data-intensive queries.
Netflix’s Big Data Platform team manages a data warehouse in Amazon S3 with over 60 petabytes of data and writes hundreds of terabytes of data every day. With a data warehouse at this scale, it is a constant challenge to keep improving performance. This talk focuses on Iceberg, a new table metadata format designed for managing huge tables backed by S3 storage. Iceberg decreases job planning time from minutes to under a second, while also isolating reads from writes to guarantee that jobs always use consistent table snapshots.
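The snapshot isolation mentioned above can be sketched in miniature: every commit produces a new immutable snapshot (a complete list of data files), and a reader pins the snapshot that was current when it started. This is a toy model of the idea, not Iceberg's actual metadata format (which uses manifest files, not in-memory sets):

```python
class SnapshotTable:
    """Toy snapshot-based table metadata: snapshots are immutable file
    lists, commits append a new one, and readers scan a pinned snapshot."""

    def __init__(self):
        self.snapshots = [frozenset()]   # snapshot 0: empty table

    def commit(self, add=(), remove=()):
        latest = self.snapshots[-1]
        self.snapshots.append(latest.union(add).difference(remove))

    def scan(self, snapshot_id=None):
        sid = len(self.snapshots) - 1 if snapshot_id is None else snapshot_id
        return self.snapshots[sid]

table = SnapshotTable()
table.commit(add={"part-00.parquet", "part-01.parquet"})
reader_snapshot = len(table.snapshots) - 1   # a running job pins snapshot 1
table.commit(add={"part-02.parquet"}, remove={"part-00.parquet"})

# The pinned read is unaffected by the later commit:
print(sorted(table.scan(reader_snapshot)))
print(sorted(table.scan()))
```

Because a snapshot is never mutated in place, a long-running job keeps a consistent view even while writers replace files underneath it.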
In this session, you'll learn:
• Some background about big data at Netflix
• Why Iceberg is needed and the drawbacks of the current tables used by Spark and Hive
• How Iceberg maintains table metadata to make queries fast and reliable
• The benefits of Iceberg's design and how it is changing the way Netflix manages its data warehouse
• How you can get started using Iceberg
Speaker
Ryan Blue, Software Engineer, Netflix
Alluxio-FUSE as a data access layer for Dask | Alluxio, Inc.
This document discusses integrating Alluxio with Dask for processing large mass spectrometry imaging data. Alluxio is used as a distributed caching layer via its FUSE POSIX API to provide standardized access to datasets from Dask. This allows Dask to process data in parallel across compute nodes without needing to load full datasets into memory. Initial results found a 10x speedup when reading cached data from Alluxio versus directly from S3 storage each time.
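The access pattern a FUSE mount enables is ordinary chunked file IO: workers read slices of a path instead of downloading whole objects, so the full dataset never needs to fit in memory. A minimal sketch (demoed on a throwaway local file; in the Dask setup the path would sit under the Alluxio-FUSE mount instead):

```python
import os
import tempfile

def chunked_sums(path, chunk_bytes):
    """Lazily process a file in fixed-size chunks; each chunk is analyzed
    and discarded, so memory use stays bounded by chunk_bytes."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_bytes)
            if not chunk:
                break
            yield sum(chunk)          # stand-in for real per-chunk analysis

tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(bytes(range(8)))
tmp.close()
print(list(chunked_sums(tmp.name, chunk_bytes=4)))   # → [6, 22]
os.unlink(tmp.name)
```

Dask parallelizes exactly this shape of work across compute nodes, and the Alluxio cache means repeated chunk reads hit local storage rather than S3.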
Alluxio 2.0 & Near Real-time Big Data Platform w/ Spark & Alluxio | Alluxio, Inc.
Alluxio Bay Area Meetup March 14th
Join the Alluxio Meetup group: https://www.meetup.com/Alluxio
Alluxio Community slack: https://www.alluxio.org/slack
Iceberg: a modern table format for big data (Ryan Blue & Parth Brahmbhatt, Netflix)
Presto Summit 2018 (https://www.starburstdata.com/technical-blog/presto-summit-2018-recap/)
Introduction to Apache Tajo: Data Warehouse for Big Data | Jihoon Son
Tajo can infer the schema of self-describing data formats like JSON, ORC, and Parquet at query execution time without needing to pre-define and store the schema separately. This allows Tajo to query nested, complex data without requiring tedious schema definition by the user. Tajo's support of self-describing formats simplifies the process of querying nested, hierarchical data from files like the JSON log example shown.
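Query-time schema inference over self-describing records can be sketched by merging field types across JSON lines. This is an illustrative reimplementation of the idea, not Tajo's code; the type names are SQL-flavored placeholders:

```python
import json

def infer_type(value):
    """Map a JSON value to a simple type name, recursing into objects."""
    if isinstance(value, bool):
        return "boolean"
    if isinstance(value, int):
        return "bigint"
    if isinstance(value, float):
        return "double"
    if isinstance(value, dict):
        return {k: infer_type(v) for k, v in value.items()}
    if isinstance(value, list):
        return "array"
    return "text"

def infer_schema(json_lines):
    """Merge fields seen across records, so the schema never has to be
    declared and stored ahead of time."""
    schema = {}
    for line in json_lines:
        for field, value in json.loads(line).items():
            schema.setdefault(field, infer_type(value))
    return schema

logs = [
    '{"user": {"id": 7, "name": "kim"}, "latency_ms": 12.5}',
    '{"user": {"id": 8, "name": "lee"}, "path": "/home"}',
]
print(infer_schema(logs))
```

Note how the nested `user` object yields a nested schema, and how `path`, absent from the first record, is still picked up from the second — the convenience the summary describes for hierarchical log data.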
Let's Compare: A Benchmark review of InfluxDB and Elasticsearch | InfluxData
In this webinar, Ivan K will compare the performance and features of InfluxDB and Elasticsearch for common time-series workloads, specifically looking at rates of data ingestion, on-disk data compression, and query performance. Come hear how Ivan conducted his tests to determine which time-series database would best fit your needs. We will reserve 15 minutes at the end of the talk for you to ask Ivan directly about his test process and independent viewpoint.
In this talk I'll discuss how we can combine the power of PostgreSQL with TensorFlow to perform data analysis. By using the pl/python3 procedural language we can integrate machine learning libraries such as TensorFlow with PostgreSQL, opening the door for powerful data analytics combining SQL with AI. Typical use-cases might involve regression analysis to find relationships in an existing dataset and to predict results based on new inputs, or to analyse time series data and extrapolate future data taking into account general trends and seasonal variability whilst ignoring noise. Python is an ideal language for building custom systems to do this kind of work as it gives us access to a rich ecosystem of libraries such as Pandas and Numpy, in addition to TensorFlow itself.
Iceberg: A modern table format for big data (Strata NY 2018) | Ryan Blue
Hive tables are an integral part of the big data ecosystem, but the simple directory-based design that made them ubiquitous is increasingly problematic. Netflix uses tables backed by S3 that, like other object stores, don’t fit this directory-based model: listings are much slower, renames are not atomic, and results are eventually consistent. Even tables in HDFS are problematic at scale, and reliable query behavior requires readers to acquire locks and wait.
Owen O’Malley and Ryan Blue offer an overview of Iceberg, a new open source project that defines a table layout addressing the challenges of current Hive tables, with properties specifically designed for cloud object stores such as S3. Iceberg is an Apache-licensed open source project. It specifies the portable table format and standardizes many important features, including:
* All reads use snapshot isolation without locking.
* No directory listings are required for query planning.
* Files can be added, removed, or replaced atomically.
* Full schema evolution supports changes in the table over time.
* Partitioning evolution enables changes to the physical layout without breaking existing queries.
* Data files are stored as Avro, ORC, or Parquet.
* Support for Spark, Pig, and Presto.
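The partitioning point in the list above rests on transforms: a partition value is derived from a column (e.g. day-of-timestamp), files are tagged with that value, and queries filter on the column while the planner prunes files by the transform. A toy sketch of the idea (illustrative only; Iceberg's real partition specs are versioned and support several transforms):

```python
import datetime

def day_transform(ts):
    """Derive the partition value from the timestamp column, so users
    never query the partition column directly ("hidden" partitioning)."""
    return ts.date().isoformat()

# Data files tagged with the partition value their rows fall in.
files = [
    ("f1.parquet", day_transform(datetime.datetime(2018, 9, 1, 4))),
    ("f2.parquet", day_transform(datetime.datetime(2018, 9, 2, 9))),
]

def prune(files, ts_lo, ts_hi):
    """Keep only files whose partition could contain rows in [ts_lo, ts_hi]."""
    lo, hi = day_transform(ts_lo), day_transform(ts_hi)
    return [f for f, part in files if lo <= part <= hi]

print(prune(files,
            datetime.datetime(2018, 9, 2),
            datetime.datetime(2018, 9, 3)))   # → ['f2.parquet']
```

Because the transform, not a directory name, defines the layout, the physical partitioning can evolve without breaking existing queries — the "partitioning evolution" bullet above.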
Presto: SQL-on-Anything. Netherlands Hadoop User Group Meetup | Wojciech Biela
Presto is an open source distributed SQL query engine for running interactive analytic queries against data sources of all sizes, ranging from gigabytes to petabytes. Presto was designed and written from the ground up for interactive analytics and approaches the speed of commercial data warehouses while scaling to the size of organizations like Facebook. One key feature in Presto is the ability to query data where it lives via a uniform ANSI SQL interface. Presto’s connector architecture creates an abstraction layer for anything that can be represented in a columnar or row-like format, such as HDFS, Amazon S3, Azure Storage, NoSQL stores, relational databases, Kafka streams and even proprietary data stores. Furthermore, a single Presto query can combine data from multiple sources, allowing for analytics across an entire organization.
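The connector abstraction can be reduced to its essence: any source that can yield rows sits behind a common interface, and one query plan can draw from several connectors at once. A highly simplified sketch (real Presto connectors are Java SPI plugins that also expose metadata and splits for parallelism; the table names here are hypothetical):

```python
class Connector:
    """Minimal connector interface: anything that can yield rows as dicts."""
    def rows(self, table):
        raise NotImplementedError

class InMemoryConnector(Connector):
    def __init__(self, tables):
        self.tables = tables
    def rows(self, table):
        yield from self.tables[table]

# Two independent "storage systems" behind one query.
hive = InMemoryConnector({"orders": [{"user_id": 1, "total": 30},
                                     {"user_id": 2, "total": 55}]})
mysql = InMemoryConnector({"users": [{"id": 1, "name": "ana"},
                                     {"id": 2, "name": "bo"}]})

def join_orders_with_users():
    """Hash join across connectors, like the federated query
    SELECT name, total FROM hive.orders JOIN mysql.users ON user_id = id."""
    users = {u["id"]: u["name"] for u in mysql.rows("users")}
    return [{"name": users[o["user_id"]], "total": o["total"]}
            for o in hive.rows("orders")]

print(join_orders_with_users())
```

Because the engine only ever sees the row interface, adding a new data source means implementing the connector, not changing the query engine — the "SQL-on-Anything" claim in the title.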
Presto best practices for cluster admins, data engineers and analysts | Shubham Tagra
This document provides best practices for using Presto across three categories: cluster admins, data engineers, and end users. For admins, it recommends optimizing JVM size, setting concurrency limits, using spot instances to reduce costs, enabling data caching, and using resource groups for isolation. For data engineers, it suggests best practices for data storage like using columnar formats and statistics. For end users, tips include using deterministic filters, explaining queries, and addressing skew through techniques like broadcast joins.
Ultra-fast SQL Analytics using PAS (Presto on Alluxio Stack) | Alluxio, Inc.
Presto Meetup @ Uber
Nov 21, 2019
Speakers:
Haoyuan (H.Y.) Li, Founder and CTO | Alluxio
Bin Fan, Founding engineer and VP of Open Source | Alluxio
For more Alluxio events: https://www.alluxio.io/events/
Rise of Intermediate APIs - Beam and Alluxio at Alluxio Meetup 2016 | Alluxio, Inc.
This document discusses the rise of intermediary APIs like Apache Beam and Alluxio that allow users to write data processing jobs and express storage lifecycles independently of physical constraints. Intermediary APIs provide portability across frameworks and unified access to multiple storage systems. Alluxio in particular provides an in-memory filesystem that can cache data from various storage sources, while Beam allows processing jobs to run on different execution engines. These intermediary APIs create a path for easy technology adoption and focus on features over connectivity.
Advancing GPU Analytics with RAPIDS Accelerator for Spark and Alluxio | Alluxio, Inc.
This document discusses accelerating Apache Spark workloads using RAPIDS Accelerator for Spark and Alluxio. It provides an introduction to RAPIDS Accelerator for Spark, shows significant performance gains over CPU-only Spark, and discusses combining GPU acceleration with Alluxio for optimized performance and cost on cloud datasets. Configuration options for RAPIDS and Alluxio are also covered.
Using Ceph for Large Hadron Collider Data | Rob Gardner
Talk by Lincoln Bryant (University of Chicago ATLAS team) on using Ceph for ATLAS data analysis @ Ceph Days Chicago http://ceph.com/cephdays/ceph-day-chicago/
The Google File System was designed by Google to store and manage large files across thousands of commodity servers. It uses a single master to manage metadata and track file locations across chunkservers. Chunks are replicated for reliability and placed across racks to improve bandwidth utilization. The system provides high throughput for concurrent reads and writes through leases to maintain consistency and pipelining of data flows. Logs and replication are used to provide fault tolerance against server failures.
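The rack-aware replica placement described above can be sketched as a small placement function: prefer one replica per rack so a single rack failure cannot lose all copies, and only double up within a rack when there are fewer racks than replicas. This is an illustrative heuristic, not GFS's actual policy (which also weighs disk utilization and recent load):

```python
def place_replicas(servers, replicas=3):
    """Choose chunk replica locations, preferring distinct racks.
    `servers` maps server name -> rack name."""
    chosen, used_racks = [], set()
    # First pass: at most one replica per rack.
    for server, rack in servers.items():
        if len(chosen) == replicas:
            break
        if rack not in used_racks:
            chosen.append(server)
            used_racks.add(rack)
    # Second pass: if racks are scarce, fill remaining slots from any server.
    for server in servers:
        if len(chosen) == replicas:
            break
        if server not in chosen:
            chosen.append(server)
    return chosen

servers = {"s1": "rackA", "s2": "rackA", "s3": "rackB", "s4": "rackC"}
print(place_replicas(servers))   # → ['s1', 's3', 's4']
```

Spreading replicas across racks trades a slower cross-rack write pipeline for better fault tolerance and higher aggregate read bandwidth, the balance the summary mentions.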
Streaming millions of Contact Center interactions in (near) real-time with Pu... | Frank Kelly
Cogito uses Apache Pulsar to stream millions of contact center interactions in near real-time. They break each customer call into intervals, with each interval generating two Pulsar topics for real-time audio and analytics. With 15,000 concurrent users, they estimate creating 1.5-2 million topics per day. Key challenges include Zookeeper memory usage, broker configuration tuning, and bookie throughput optimization to handle their high throughput workload.
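The topic-count estimate above follows directly from the interval-per-call design: each interval creates two topics (real-time audio and analytics), so topics per day is calls × intervals × 2. A back-of-envelope sketch; the per-call numbers here are illustrative, since the talk only states the resulting 1.5-2 million figure:

```python
def topics_per_day(calls_per_day, intervals_per_call, topics_per_interval=2):
    """Estimate daily Pulsar topic creation for the interval-per-call
    design: two topics (audio + analytics) per interval."""
    return calls_per_day * intervals_per_call * topics_per_interval

# Illustrative inputs chosen to land in the stated 1.5-2M/day range.
print(topics_per_day(calls_per_day=100_000, intervals_per_call=8))  # → 1600000
```

Topic counts at this scale are exactly why the summary calls out Zookeeper memory, broker tuning, and bookie throughput as the hard parts.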
Using Ceph for Large Hadron Collider DataRob Gardner
Talk by Lincoln Bryant (University of Chicago ATLAS team) on using Ceph for ATLAS data analysis @ Ceph Days Chicago http://ceph.com/cephdays/ceph-day-chicago/
The Google File System was designed by Google to store and manage large files across thousands of commodity servers. It uses a single master to manage metadata and track file locations across chunkservers. Chunks are replicated for reliability and placed across racks to improve bandwidth utilization. The system provides high throughput for concurrent reads and writes through leases to maintain consistency and pipelining of data flows. Logs and replication are used to provide fault tolerance against server failures.
Streaming millions of Contact Center interactions in (near) real-time with Pu...Frank Kelly
Cogito uses Apache Pulsar to stream millions of contact center interactions in near real-time. They break each customer call into intervals, with each interval generating two Pulsar topics for real-time audio and analytics. With 15,000 concurrent users, they estimate creating 1.5-2 million topics per day. Key challenges include Zookeeper memory usage, broker configuration tuning, and bookie throughput optimization to handle their high throughput workload.
[Paper reading] Interleaving with Coroutines: A Practical Approach for Robust...PingCAP
This paper proposes interleaving with coroutines for any type of index join. It showcases the proposal on SAP HANA by implementing binary search and CSB+-tree traversal for an instance of index join related to dictionary compression. Coroutine implementations not only perform similarly to prior interleaving techniques, but also resemble the original code closely, while supporting both interleaved and non-interleaved execution. Thus, the paper claims that coroutines make interleaving practical for use in real DBMS codebases.
Paper: http://www.vldb.org/pvldb/vol11/p230-psaropoulos.pdf
Follow PingCAP on Twitter: https://twitter.com/PingCAP
Follow PingCAP on LinkedIn: https://www.linkedin.com/company/13205484/
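The interleaving idea from the paper above, suspending each lookup right before a cache-missing memory access and switching to another lookup so the fetches overlap, can be sketched with Python generators. This is a sketch of the technique only: the paper's implementation uses C++ coroutines and hardware prefetch instructions inside SAP HANA, while here `yield` merely marks where a prefetch would be issued.

```python
# Sketch of coroutine-based interleaving: each binary search yields right
# before touching memory (where real code would issue a prefetch), and a
# round-robin scheduler resumes another search so misses can overlap.

def binary_search(data, key):
    lo, hi = 0, len(data)
    while lo < hi:
        mid = (lo + hi) // 2
        yield  # suspension point: real code prefetches data[mid] here
        if data[mid] < key:
            lo = mid + 1
        else:
            hi = mid
    return lo  # leftmost insertion point, like bisect_left

def interleave(searches):
    """Run many lookups concurrently, advancing each one step per round."""
    results, pending = {}, dict(enumerate(searches))
    while pending:
        for i in list(pending):
            try:
                next(pending[i])
            except StopIteration as stop:
                results[i] = stop.value
                del pending[i]
    return [results[i] for i in range(len(searches))]

data = list(range(0, 100, 2))  # sorted keys 0, 2, ..., 98
positions = interleave([binary_search(data, k) for k in (10, 31, 98)])
assert positions == [5, 16, 49]
```

The generator version stays structurally close to a plain binary search, which mirrors the paper's point that coroutine code resembles the original, non-interleaved code.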
Keystone Data Pipeline manages several thousand Flink pipelines, with variable workloads. These pipelines are simple routers which consume from Kafka and write to one of three sinks. In order to alleviate our operational overhead, we’ve implemented autoscaling for our routers. Autoscaling has reduced our resource usage by 25% - 45% (varying by region and time), and has reduced our on-call burden. This talk will take an in-depth look at the mathematics, algorithms, and infrastructure details for implementing autoscaling of simple pipelines at scale. It will also discuss future work for autoscaling complex pipelines.
Enabling Presto to handle massive scale at lightning speedShubham Tagra
Presto User Group Singapore Meetup - March 2019.
These slides talk through the current state of Presto and features that help Presto work better in cloud and a glimpse into the roadmap
Another year, another talk about OpenTSDB running on HBase.
We'll discuss topics like:
Yahoo's append co-processor saving CPU resources by resolving atomic appends at compaction or query time.
The pros and cons of HBASE-15181, Date Tiered compaction for time series data.
Yahoo's experiments with unbounded secondary index on HBase.
OpenTSDB's 3.0 featuring a new query engine and API.
by Chris Larsen of Yahoo!
Cache Optimization Techniques for General Purpose Graphic Processing UnitsVajira Thambawita
This document summarizes research on adapting CPU cache optimization techniques for general purpose graphic processing units (GPGPUs). It first discusses related work on CPU and GPGPU cache architectures and optimization techniques. It then presents the conceptual design of selecting CPU techniques and analyzing their adaptation to GPGPUs. Two common CPU techniques, stride-one access and blocking, are adapted and experimental results show their effectiveness on a GPGPU, with blocking providing better performance than non-blocking approaches. The research contributes techniques for programmers to optimize GPGPU cache performance.
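The blocking technique adapted in that research, restructuring loops so a small tile of data is fully reused while it is cache-resident, can be sketched in plain Python. This only illustrates the access pattern; the research itself targets GPGPU kernels, and the `BLOCK` size below is an arbitrary example value.

```python
# Blocked (tiled) matrix transpose: instead of striding across an entire
# row at a time, process BLOCK x BLOCK tiles so both the reads and the
# writes stay within a cache-sized working set.

BLOCK = 4  # tile edge; tuned to cache (or GPU shared-memory) size in practice

def transpose_blocked(a):
    n, m = len(a), len(a[0])
    out = [[0] * n for _ in range(m)]
    for ii in range(0, n, BLOCK):          # tile row start
        for jj in range(0, m, BLOCK):      # tile column start
            for i in range(ii, min(ii + BLOCK, n)):
                for j in range(jj, min(jj + BLOCK, m)):
                    out[j][i] = a[i][j]
    return out

a = [[i * 8 + j for j in range(8)] for i in range(6)]
assert transpose_blocked(a) == [list(col) for col in zip(*a)]
```

In a compiled language the naive transpose makes one of the two arrays miss the cache on nearly every access; tiling bounds the working set so both arrays stay resident, which is the same effect the blocking adaptation seeks on GPGPU caches.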
Scaling Monitoring At Databricks From Prometheus to M3LibbySchulze
M3 has been successfully deployed at Databricks to replace their Prometheus monitoring system. Some key lessons learned include monitoring important M3 metrics like memory and disk usage, having automated deployment processes, and planning for capacity needs and spikes in metrics. Updates to M3 have gone smoothly, and future plans include using new M3 features like downsampling and separate namespaces.
MongoDB World 2019: Packing Up Your Data and Moving to MongoDB AtlasMongoDB
Moving to a new home is daunting. Packing up all your things, getting a vehicle to move it all, unpacking it, updating your mailing address, and making sure you did not leave anything behind. Well, the move to MongoDB Atlas is similar, but all the logistics are already figured out for you by MongoDB.
Memory system, and not processor speed, is often the bottleneck for many applications.
Memory system performance is largely captured by two parameters, latency and bandwidth.
Latency is the time from the issue of a memory request to the time the data is available at the processor.
Bandwidth is the rate at which data can be pumped to the processor by the memory system.
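The distinction between the two parameters can be made concrete with a back-of-the-envelope model: total fetch time is latency plus bytes divided by bandwidth, so small accesses are latency-bound and large streaming accesses are bandwidth-bound. The numbers below are illustrative assumptions, not measurements of any real system.

```python
# Simple memory-system model: time to fetch = latency + bytes / bandwidth.
# Both constants are hypothetical, chosen only to illustrate the regimes.

LATENCY_S = 100e-9      # 100 ns until the first byte arrives
BANDWIDTH_BPS = 10e9    # 10 GB/s sustained transfer rate

def fetch_time(num_bytes):
    return LATENCY_S + num_bytes / BANDWIDTH_BPS

small = fetch_time(64)        # one 64-byte cache line
large = fetch_time(1 << 20)   # a 1 MB streaming read

assert LATENCY_S / small > 0.9    # latency dominates the small fetch
assert LATENCY_S / large < 0.01   # bandwidth dominates the large one
```

This is why techniques like prefetching and blocking matter: they either hide the latency term or amortize it over larger, bandwidth-friendly transfers.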
This document summarizes a project report on optimizing fracking simulations for GPU acceleration. The simulations model hydraulic fracturing and consist of three phases. The focus was on the second phase, which calculates interaction factors and stresses between grid cells and takes 80% of the CPU execution time. This phase was implemented on a GPU using techniques like finding parallelism at the cell and grid level, optimizing data transfers, memory access, and using streams to execute cells concurrently. These optimizations led to speedups of up to 56x compared to the CPU implementation.
The document summarizes a Kaggle competition to forecast web traffic for Wikipedia articles. It discusses the goal of forecasting traffic for 145,000 articles, the evaluation metric used, an overview of the winner's solution using recurrent neural networks, and lessons learned. Key points include that the winner used a sequence-to-sequence model with GRU units to capture local and global patterns in the time series data, and employed techniques like model averaging to reduce variance.
Refactoring Applications for the XK7 and Future Hybrid ArchitecturesJeff Larkin
This document outlines a tutorial on refactoring applications for future hybrid architectures like the Cray XK7. The tutorial will cover analyzing application code to identify parallelization opportunities using OpenMP, OpenACC, and CUDA/Cuda Fortran. It will also discuss the architecture of the XK7 node, which features an AMD CPU and Nvidia GPU, and developing a strategy for refactoring the application to perform well on today's and tomorrow's systems. Attendees will learn how to optimize their applications for hybrid CPU/GPU architectures through tools-assisted refactoring.
http://imatge-upc.github.io/telecombcn-2016-dlcv/
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of big annotated data and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which had been addressed until now with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks and Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles and applications of deep learning to computer vision problems, such as image classification, object detection or text captioning.
Strata+Hadoop 2017 San Jose: Lessons from a year of supporting Apache Kafkaconfluent
The number of deployments of Apache Kafka at enterprise scale has greatly increased in the years since Kafka’s original development in 2010. Along with this rapid growth has come a wide variety of use cases and deployment strategies that transcend what Kafka’s creators imagined when they originally developed the technology. As the scope and reach of streaming data platforms based on Apache Kafka has grown, the need to understand monitoring and troubleshooting strategies has as well.
Dustin Cote and Ryan Pridgeon share their experience supporting Apache Kafka at enterprise-scale and explore monitoring and troubleshooting techniques to help you avoid pitfalls when scaling large-scale Kafka deployments.
Topics include:
- Effective use of JMX for Kafka
- Tools for preventing small problems from becoming big ones
- Efficient architectures proven in the wild
- Finding and storing the right information when it all goes wrong
Visit www.confluent.io for more information.
Pain points with M3, some things to address them and how replication worksRob Skillington
This document discusses some pain points with running the M3 time series database in a clustered configuration for metrics collection. It notes that clustering takes significant work and learning. Key challenges include long bootstrap times for large instances, difficulties controlling and monitoring memory usage, and harder implementation of downsampling in a central model versus a sidecar model. The document provides updates on work being done to address these issues through improvements to bootstrapping, memory management, and downsampling support.
AI/ML Infra Meetup | ML explainability in MichelangeloAlluxio, Inc.
AI/ML Infra Meetup
May. 23, 2024
Organized by Alluxio
For more Alluxio Events: https://www.alluxio.io/events/
Speaker:
- Eric Wang (Software Engineer, @Uber)
Uber has numerous deep learning models, most of which are highly complex with many layers and a vast number of features. Understanding how these models work is challenging and demands significant resources to experiment with various training algorithms and feature sets. With ML explainability, the ML team aims to bring transparency to these models, helping to clarify their predictions and behavior. This transparency also assists the operations and legal teams in explaining the reasons behind specific prediction outcomes.
In this talk, Eric Wang will discuss the methods Uber used for explaining deep learning models and how we integrated these methods into the Uber AI Michelangelo ecosystem to support offline explaining.
AI/ML Infra Meetup | Reducing Prefill for LLM Serving in RAGAlluxio, Inc.
AI/ML Infra Meetup
May. 23, 2024
Organized by Alluxio
For more Alluxio Events: https://www.alluxio.io/events/
Speaker:
- Junchen Jiang (Assistant Professor of Computer Science, @University of Chicago)
Prefill in LLM inference is known to be resource-intensive, especially for long LLM inputs. While better scheduling can mitigate prefill’s impact, it would be fundamentally better to avoid (most of) prefill. This talk introduces our preliminary effort towards drastically minimizing prefill delay for LLM inputs that naturally reuse text chunks, such as in retrieval-augmented generation. While keeping the KV cache of all text chunks in memory is difficult, we show that it is possible to store them on cheaper yet slower storage. By improving the loading process of the reused KV caches, we can still significantly speed up prefill delay while maintaining the same generation quality.
AI/ML Infra Meetup | Perspective on Deep Learning FrameworkAlluxio, Inc.
AI/ML Infra Meetup
May. 23, 2024
Organized by Alluxio
For more Alluxio Events: https://www.alluxio.io/events/
Speaker:
- Triston Cao (Senior Deep Learning Software Engineering Manager, @NVIDIA)
From Caffe to MXNet, to PyTorch, and more, Xiande Cao, Senior Deep Learning Software Engineer Manager, will share his perspective on the evolution of deep learning frameworks.
AI/ML Infra Meetup | Improve Speed and GPU Utilization for Model Training & S...Alluxio, Inc.
AI/ML Infra Meetup
May. 23, 2024
Organized by Alluxio
For more Alluxio Events: https://www.alluxio.io/events/
Speaker:
- Lu Qiu (Data & AI Platform Tech Lead, @Alluxio)
- Siyuan Sheng (Senior Software Engineer, @Alluxio)
Speed and efficiency are two requirements for the underlying infrastructure for machine learning model development. Data access can bottleneck end-to-end machine learning pipelines as training data volume grows and when large model files are more commonly used for serving. For instance, data loading can constitute nearly 80% of the total model training time, resulting in less than 30% GPU utilization. Also, loading large model files for deployment to production can be slow because of slow network or storage read operations. These challenges are prevalent when using popular frameworks like PyTorch, Ray, or HuggingFace, paired with cloud object storage solutions like S3 or GCS, or downloading models from the HuggingFace model hub.
In this presentation, Lu and Siyuan will offer comprehensive insights into improving speed and GPU utilization for model training and serving. You will learn:
- The data loading challenges hindering GPU utilization
- The reference architecture for running PyTorch and Ray jobs while reading data from S3, with benchmark results of training ResNet50 and BERT
- Real-world examples of boosting model performance and GPU utilization through optimized data access
Alluxio Monthly Webinar | Simplify Data Access for AI in Multi-CloudAlluxio, Inc.
Alluxio Monthly Webinar
May. 14, 2024
For more Alluxio Events: https://www.alluxio.io/events/
Speaker:
- ChanChan Mao (Developer Advocate, Alluxio)
- Bin Fan (VP of Technology, Alluxio)
Running AI/ML workloads in different clouds presents unique challenges. The key to a manageable multi-cloud architecture is the ability to seamlessly access data across environments with high performance and low cost.
This webinar is designed for data platform engineers, data infra engineers, data engineers, and ML engineers who work with multiple data sources in hybrid or multi-cloud environments. Chanchan and Bin will guide the audience through using Alluxio to greatly simplify data access and make model training and serving more efficient in these environments.
You will learn:
- How to access data in multi-region, hybrid, and multi-cloud like accessing a local file system
- How to run PyTorch to read datasets and write checkpoints to remote storage with Alluxio as the distributed data access layer
- Real-world examples and insights from tech giants like Uber, AliPay and more
Alluxio Monthly Webinar | Cloud-Native Model Training on Distributed DataAlluxio, Inc.
Alluxio Monthly Webinar
Apr. 23, 2024
For more Alluxio Events: https://www.alluxio.io/events/
Speaker:
- ChanChan Mao (Developer Advocate, Alluxio)
- Shawn Sun (Tech Lead of Cloud Native, Alluxio)
Cloud-native model training jobs require fast data access to achieve shorter training cycles. Accessing data can be challenging when your datasets are distributed across different regions and clouds. Additionally, as GPUs remain scarce and expensive resources, it becomes more common to set up training clusters remote from where the data resides. This multi-region/cloud scenario introduces the challenges of losing data locality, resulting in operational overhead, latency and expensive cloud costs.
In the third webinar of the multi-cloud webinar series, Chanchan and Shawn dive deep into:
- The data locality challenges in the multi-region/cloud ML pipeline
- Using a cloud-native distributed caching system to overcome these challenges
- The architecture and integration of PyTorch/Ray+Alluxio+S3 using POSIX or RESTful APIs
- Live demo with ResNet and BERT benchmark results showing performance gains and cost savings analysis
Optimizing Data Access for Analytics And AI with AlluxioAlluxio, Inc.
Alluxio x Tobiko - ETL Happy Hour
April 16, 2024
For more Alluxio events: https://alluxio.io/events/
Speaker:
Lucy Ge (Staff Software Engineer @ Alluxio)
In this presentation, Lucy Ge will discuss the data access challenges in the data pipeline and how to optimize the speed and costs of analytics and AI workloads.
Speed Up Presto at Uber with Alluxio CachingAlluxio, Inc.
Alluxio x Tobiko - ETL Happy Hour
April 16, 2024
For more Alluxio events: https://alluxio.io/events/
Speaker:
Chen Liang (Staff Software Engineer @ Uber)
In this presentation, Chen Liang will share the design and implementation of the Alluxio-Presto local cache to reduce query latency.
Correctly Loading Incremental Data at ScaleAlluxio, Inc.
Alluxio x Tobiko - ETL Happy Hour
April 16, 2024
For more Alluxio events: https://alluxio.io/events/
Speaker:
Toby Mao (CTO @ Tobiko Data)
Writing efficient and correct incremental pipelines is challenging. Data practitioners who take on this challenge are viewed as performing an "advanced" function, which discourages broader teams from adopting incremental loads. In this lightning talk, CTO of Tobiko Data, Toby Mao, will demystify incremental loading data at scale.
Big Data Bellevue Meetup | Enhancing Python Data Loading in the Cloud for AI/MLAlluxio, Inc.
Big Data Bellevue Meetup
March 21, 2024
For more Alluxio events: https://alluxio.io/events/
Speakers:
Bin Fan (VP of Open Source, Alluxio)
In this presentation, Bin Fan (VP of Open Source @ Alluxio) will address a critical challenge of optimizing data loading for distributed Python applications within AI/ML workloads in the cloud, focusing on popular frameworks like Ray and Hugging Face. Integration of Alluxio’s distributed caching for Python applications is accomplished using the fsspec interface, thus greatly improving data access speeds. This is particularly useful in machine learning workflows, where repeated data reloading across slow, unstable or congested networks can severely affect GPU efficiency and escalate operational costs.
Attendees can look forward to practical, hands-on demonstrations showcasing the tangible benefits of Alluxio’s caching mechanism across various real-world scenarios. These demos will highlight the enhancements in data efficiency and overall performance of data-intensive Python applications. This presentation is tailored for developers and data scientists eager to optimize their AI/ML workloads. Discover strategies to accelerate your data processing tasks, making them not only faster but also more cost-efficient.
Alluxio Monthly Webinar | Why a Multi-Cloud Strategy Matters for Your AI Plat...Alluxio, Inc.
Alluxio Monthly Webinar
Feb. 27, 2024
For more Alluxio Events: https://www.alluxio.io/events/
Speaker:
- Tarik Bennett (Senior Solutions Engineer, Alluxio)
As GenAI and AI continue to transform businesses, scaling these workloads requires optimized underlying infrastructure. A multi-cloud architecture allows organizations to leverage different cloud services to meet diverse workload demands while maximizing efficiency, reducing costs, and avoiding vendor lock-in. However, achieving a multi-cloud vision can be challenging.
In this webinar, Tarik will share how an agnostic data layer, like Alluxio, allows you to embrace the separation of storage from compute and simplify the adoption of multi-cloud for AI.
- Learn why leveraging multiple cloud providers is critical for balancing performance, scalability, and cost of your AI platform
- Discover how an agnostic data layer like Alluxio provides seamless data access in multi-cloud that bridges storage and compute without data replication
- Gain insights into real-world examples and best practices for deploying AI across on-prem, hybrid, and multi-cloud environments
Alluxio Monthly Webinar | Five Disruptive Trends that Every Data & AI Leader...Alluxio, Inc.
Alluxio Monthly Webinar
Jan. 30, 2024
For more Alluxio Events: https://www.alluxio.io/events/
Speaker:
- Kevin Petrie (VP of Research, Eckerson Group)
- Omid Razavi (SVP of Customer Success, Alluxio)
2024 is gearing up to be an impactful year for AI and analytics. Join us on January 30, as Kevin Petrie (VP of Research at Eckerson Group) and Omid Razavi (SVP of Customer Success at Alluxio) share key trends that data and AI leaders should know. This event will efficiently guide you with market data and expert insights to drive successful business outcomes.
- Assess current and future trends in data and AI with industry experts
- Discover valuable insights and practical recommendations
- Learn best practices to make your enterprise data more accessible for both analytics and AI applications
Data Infra Meetup | FIFO Queues are All You Need for Cache EvictionAlluxio, Inc.
Data Infra Meetup
Jan. 25, 2024
Organized by Alluxio
For more Alluxio Events: https://www.alluxio.io/events/
Speaker:
- Juncheng Yang (Ph.D. Candidate, @CMU)
As a cache eviction algorithm, FIFO has a lot of attractive properties, such as simplicity, speed, scalability, and flash-friendliness. The most prominent criticism of FIFO is its low efficiency (high miss ratio). In this talk, I will describe a simple, scalable FIFO-based algorithm with three static queues (S3-FIFO). Evaluated on 6594 cache traces from 14 datasets, we show that S3-FIFO has lower miss ratios than state-of-the-art algorithms across traces. Moreover, S3-FIFO’s efficiency is robust: it has the lowest mean miss ratio on 10 of the 14 datasets. FIFO queues enable S3-FIFO to achieve good scalability, with 6× higher throughput compared to optimized LRU at 16 threads. Our insight is that most objects in skewed workloads will only be accessed once in a short window, so it is critical to evict them early (also called quick demotion). The key to S3-FIFO is a small FIFO queue that filters out most objects from entering the main cache, which provides a guaranteed demotion speed and high demotion precision.
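The queue structure described in the talk can be sketched as follows. This is a simplified reading of the published S3-FIFO design, not the authors' reference code: new objects enter a small FIFO that quickly demotes one-hit wonders, a ghost FIFO remembers recently evicted keys so re-misses go straight to the main FIFO, and main-queue entries get frequency-decayed second chances.

```python
from collections import deque

class S3FIFO:
    """Simplified S3-FIFO: a small FIFO filters one-hit wonders; a ghost
    list of evicted keys routes re-missed objects into the main FIFO."""

    def __init__(self, size, small_ratio=0.1):
        self.small_cap = max(1, int(size * small_ratio))
        self.main_cap = size - self.small_cap
        self.small, self.main = deque(), deque()
        self.ghost = deque(maxlen=size)   # evicted keys only, bounded
        self.freq = {}                    # per-cached-key access count

    def get(self, key):
        if key in self.freq:              # hit: bump frequency (capped)
            self.freq[key] = min(self.freq[key] + 1, 3)
            return True
        self._insert(key)                 # miss: admit the object
        return False

    def _insert(self, key):
        if key in self.ghost:             # seen recently: bypass the filter
            self.ghost.remove(key)
            self._make_room_in_main()
            self.main.append(key)
        else:
            self._make_room_in_small()
            self.small.append(key)
        self.freq[key] = 0

    def _make_room_in_small(self):
        while len(self.small) >= self.small_cap:
            k = self.small.popleft()
            if self.freq.pop(k) > 0:      # re-accessed: promote to main
                self._make_room_in_main()
                self.main.append(k)
                self.freq[k] = 0
            else:                         # one-hit wonder: quick demotion
                self.ghost.append(k)

    def _make_room_in_main(self):
        while len(self.main) >= self.main_cap:
            k = self.main.popleft()
            if self.freq[k] > 0:          # second chance, with decay
                self.freq[k] -= 1
                self.main.append(k)
            else:
                del self.freq[k]
```

Because every structure is a plain FIFO, all operations are queue pushes and pops, which is what gives the design its lock-friendly scalability relative to LRU's list reordering on every hit.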
Data Infra Meetup | Accelerate Your Trino/Presto Queries - Gain the Alluxio EdgeAlluxio, Inc.
Data Infra Meetup
Jan. 25, 2024
Organized by Alluxio
For more Alluxio Events: https://www.alluxio.io/events/
Speaker:
- Jingwen Ouyang (Product Manager, @Alluxio)
In this session, Jingwen presents an overview of using Alluxio Edge caching to accelerate Trino or Presto queries. She offers practical best practices for using distributed caching with compute engines. In addition, this session also features insights from real-world examples.
Data Infra Meetup | Accelerate Distributed PyTorch/Ray Workloads in the CloudAlluxio, Inc.
Data Infra Meetup
Jan. 25, 2024
Organized by Alluxio
For more Alluxio Events: https://www.alluxio.io/events/
Speaker:
- Siyuan Sheng (Senior Software Engineer, @Alluxio)
- Chunxu Tang (Research Scientist, @Alluxio)
In this session, cloud optimization specialists Chunxu and Siyuan break down the challenges and present a fresh architecture designed to optimize I/O across the data pipeline, ensuring GPUs function at peak performance. The integrated solution of PyTorch/Ray + Alluxio + S3 offers a promising way forward, and the speakers delve deep into its practical applications. Attendees will not only gain theoretical insights but will also be treated to hands-on instructions and demonstrations of deploying this cutting-edge architecture in Kubernetes, specifically tailored for Tensorflow/PyTorch/Ray workloads in the public cloud.
Data Infra Meetup | ByteDance's Native Parquet ReaderAlluxio, Inc.
Data Infra Meetup
Jan. 25, 2024
Organized by Alluxio
For more Alluxio Events: https://www.alluxio.io/events/
Speaker:
- Shengxuan Liu (Software Engineer, @ByteDance)
Shengxuan Liu from ByteDance presents the new ByteDance’s native Parquet Reader. The talk covers the architecture and key features of the Reader, and how the new Reader is able to facilitate data processing efficiency.
Data Infra Meetup | Uber's Data Storage EvolutionAlluxio, Inc.
Data Infra Meetup
Jan. 25, 2024
Organized by Alluxio
For more Alluxio Events: https://www.alluxio.io/events/
Speaker:
- Jing Zhao (Principal Engineer, @Uber)
Uber builds one of the biggest data lakes in the industry, which stores exabytes of data. In this talk, we will introduce the evolution of our data storage architecture, and delve into multiple key initiatives during the past several years.
Specifically, we will introduce:
- Our on-prem HDFS cluster scalability challenges and how we solved them
- Our efficiency optimizations that significantly reduced the storage overhead and unit cost without compromising reliability and performance
- The challenges we are facing during the ongoing Cloud migration and our solutions
Alluxio Monthly Webinar | Why NFS/NAS on Object Storage May Not Solve Your AI...Alluxio, Inc.
Alluxio Monthly Webinar
Nov. 15, 2023
For more Alluxio Events: https://www.alluxio.io/events/
Speaker:
- Tarik Bennett (Senior Solutions Engineer)
- Beinan Wang (Senior Staff Engineer & Architect)
Many companies are working with development architectures for AI platforms but have concerns about efficiency at scale as data volumes increase. They use centralized cloud data lakes, like S3, to store training data for AI platforms. However, GPU shortages add more complications. Storage and compute can be separate, or even remote, making data loading slow and expensive:
1) Optimizing a developmental setup can include manual copies, which are slow and error-prone
2) Directly transferring data across regions or from cloud to on-premises can incur expensive egress fees
This webinar covers solutions to improve data loading for model training. You will learn:
- The data loading challenges with distributed infrastructure
- Typical solutions, including NFS/NAS on object storage, and why they are not the best options
- Common architectures that can improve data loading and cost efficiency
- Using Alluxio to accelerate model training and reduce costs
AI Infra Day | Accelerate Your Model Training and Serving with Distributed Ca...Alluxio, Inc.
AI Infra Day
Oct. 25, 2023
Organized by Alluxio
For more Alluxio Events: https://www.alluxio.io/events/
Speaker:
- Adit Madan (Director of Product Management, @Alluxio)
In this session, Adit Madan, Director of Product Management at Alluxio, presents an overview of using distributed caching to accelerate model training and serving. He explores the requirements of data access patterns in the ML pipeline and offers practical best practices for using distributed caching in the cloud. This session features insights from real-world examples, such as AliPay, Zhihu, and more.
AI Infra Day | The AI Infra in the Generative AI EraAlluxio, Inc.
AI Infra Day
Oct. 25, 2023
Organized by Alluxio
For more Alluxio Events: https://www.alluxio.io/events/
Speaker:
- Bin Fan (Chief Architect, VP of Open Source, @Alluxio)
As the AI landscape rapidly evolves, the advancements in generative AI technologies, such as ChatGPT, are driving a need for a robust AI infra stack. This opening keynote will explore the key trends of the AI infra stack in the generative AI era.
E-Invoicing Implementation: A Step-by-Step Guide for Saudi Arabian CompaniesQuickdice ERP
Explore the seamless transition to e-invoicing with this comprehensive guide tailored for Saudi Arabian businesses. Navigate the process effortlessly with step-by-step instructions designed to streamline implementation and enhance efficiency.
What is Master Data Management by PiLog Groupaymanquadri279
PiLog Group's Master Data Record Manager (MDRM) is a sophisticated enterprise solution designed to ensure data accuracy, consistency, and governance across various business functions. MDRM integrates advanced data management technologies to cleanse, classify, and standardize master data, thereby enhancing data quality and operational efficiency.
Measures in SQL (SIGMOD 2024, Santiago, Chile)Julian Hyde
SQL has attained widespread adoption, but Business Intelligence tools still use their own higher-level languages based on a multidimensional paradigm. Composable calculations are what is missing from SQL, and we propose a new kind of column, called a measure, that attaches a calculation to a table. Like regular tables, tables with measures are composable and closed when used in queries.
SQL-with-measures has the power, conciseness and reusability of multidimensional languages but retains SQL semantics. Measure invocations can be expanded in place to simple, clear SQL.
To define the evaluation semantics for measures, we introduce context-sensitive expressions (a way to evaluate multidimensional expressions that is consistent with existing SQL semantics), a concept called evaluation context, and several operations for setting and modifying the evaluation context.
A talk at SIGMOD, June 9–15, 2024, Santiago, Chile
Authors: Julian Hyde (Google) and John Fremlin (Google)
https://doi.org/10.1145/3626246.3653374
Using Query Store in Azure PostgreSQL to Understand Query PerformanceGrant Fritchey
Microsoft has added an excellent new extension in PostgreSQL on their Azure Platform. This session, presented at Posette 2024, covers what Query Store is and the types of information you can get out of it.
Hand Rolled Applicative User ValidationCode KataPhilip Schwarz
Could you use a simple piece of Scala validation code (granted, a very simplistic one too!) that you can rewrite, now and again, to refresh your basic understanding of Applicative operators <*>, <*, *>?
The goal is not to write perfect code showcasing validation, but rather, to provide a small, rough-and-ready exercise to reinforce your muscle memory.
Despite its grandiose-sounding title, this deck consists of just three slides showing the Scala 3 code to be rewritten whenever the details of the operators begin to fade away.
The code is my rough and ready translation of a Haskell user-validation program found in a book called Finding Success (and Failure) in Haskell - Fall in love with applicative functors.
Graspan: A Big Data System for Big Code Analysis - Aftab Hussain
We built a disk-based parallel graph system, Graspan, that uses a novel edge-pair centric computation model to compute dynamic transitive closures on very large program graphs.
We implement context-sensitive pointer/alias and dataflow analyses on Graspan. An evaluation of these analyses on large codebases such as Linux shows that their Graspan implementations scale to millions of lines of code and are much simpler than their original implementations.
These analyses were used to augment the existing checkers; these augmented checkers found 132 new NULL pointer bugs and 1308 unnecessary NULL tests in Linux 4.4.0-rc5, PostgreSQL 8.3.9, and Apache httpd 2.2.18.
- Accepted in ASPLOS ‘17, Xi’an, China.
- Featured in the tutorial, Systemized Program Analyses: A Big Data Perspective on Static Analysis Scalability, ASPLOS ‘17.
- Invited for presentation at SoCal PLS ‘16.
- Invited for poster presentation at PLDI SRC ‘16.
UI5con 2024 - Boost Your Development Experience with UI5 Tooling Extensions - Peter Muessig
The UI5 tooling is the development and build tooling of UI5. It is built in a modular, extensible way, so it can easily be extended to fit your needs. This session showcases tooling extensions that can greatly boost your development experience: truly working offline, transpiling the code in your project to use newer versions of EcmaScript (beyond ES2022, which the UI5 tooling supports today), consuming any npm package of your choice in your project, using different kinds of proxies, and even stitching UI5 projects together during development to mimic your target environment.
Atelier - Innover avec l'IA Générative et les graphes de connaissances - Neo4j
Go beyond the hype around AI and discover practical techniques for using AI responsibly across your organization's data. Explore how to use knowledge graphs to increase accuracy, transparency, and explainability in generative AI systems. You will leave with hands-on experience combining data relationships and LLMs to bring domain-specific context and improve reasoning.
Bring your laptop and we will walk you through setting up your own generative AI stack, providing practical, coded examples to get started in minutes.
Microservice Teams - How the cloud changes the way we work - Sven Peters
A lot of technical challenges and complexity come with building a cloud-native and distributed architecture. The way we develop backend software has fundamentally changed in the last ten years. Managing a microservices architecture demands a lot of us to ensure observability and operational resiliency. But did you also change the way you run your development teams?
Sven will talk about Atlassian’s journey from a monolith to a multi-tenanted architecture and how it affected the way the engineering teams work. You will learn how we shifted to service ownership, moved to more autonomous teams (and its challenges), and established platform and enablement teams.
OpenMetadata Community Meeting - 5th June 2024OpenMetadata
The OpenMetadata Community Meeting was held on June 5th, 2024. In this meeting, we discussed the data quality capabilities that are integrated with the Incident Manager, providing a complete solution to handle your data observability needs. Watch the end-to-end demo of the data quality features.
* How to run your own data quality framework
* What is the performance impact of running data quality frameworks
* How to run the test cases in your own ETL pipelines
* How the Incident Manager is integrated
* Get notified with alerts when test cases fail
Watch the meeting recording here - https://www.youtube.com/watch?v=UbNOje0kf6E
Improve Presto Architectural Decisions with Shadow Cache
1. Improve Presto architectural decisions with shadow cache
Zhenyu Song (Princeton University)
Ke Wang (Facebook)
October 12, 2021
2. Introduction
● Zhenyu Song: Ph.D. candidate at Princeton University, interested in caching systems
● Ke Wang: engineer at Facebook, focused on low-latency queries on the Presto team
3. Motivation: cache operation decisions
Shadow cache: a lightweight Alluxio component to track the working set size & infinite-cache hit ratio
Cache operator questions:
● How to size my cache for each tenant?
● What is the potential hit ratio improvement?
4. Motivation: cache operation decisions
Cache operator question → shadow cache answer:
● How to size my cache for each tenant? → Total unique bytes (pages) accessed in the past 24 h
● What is the potential hit ratio improvement? → Total #hit/#miss if the cache could hold all pages requested in the past 24 h
5. Shadow cache design challenges
● Goal: track the working set size & infinite-size hit ratio
● Challenges:
○ Small memory & CPU overhead
○ Accurate
○ Dynamic update
6. Solution to the overhead & accuracy challenges: Bloom filter
● A space-efficient probabilistic data structure for membership testing
● Intuition: each object is represented with only a few bits
● False positives are possible, but false negatives are not
● It has k hash functions
○ To add an element, apply each hash function and set the corresponding bit to 1
○ To query an element, apply each hash function and AND the corresponding bits
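The add/query behavior above can be sketched in a few lines. This is a minimal illustration, not Alluxio's implementation (which uses Guava's BloomFilter); the class and method names are mine, and the k hash functions are simulated by salting a single SHA-256 hash.

```python
import hashlib

class BloomFilter:
    """Minimal sketch of the structure on this slide: k hash
    functions over an m-bit array stored as a Python int bitmask."""

    def __init__(self, m_bits=1024, k=3):
        self.m, self.k, self.bits = m_bits, k, 0

    def _positions(self, key):
        # Derive k bit positions by salting one hash function with the index.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, key):
        # Set the bit at each hashed position to 1.
        for pos in self._positions(key):
            self.bits |= 1 << pos

    def might_contain(self, key):
        # AND the bits at each hashed position: a 0 anywhere means
        # "definitely absent"; all 1s mean "possibly present".
        return all(self.bits >> pos & 1 for pos in self._positions(key))
```

A query for a key that was never added can still return true (a false positive), but a key that was added will never return false.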
7. Why the Bloom filter helps
● To get the infinite-size hit ratio, we can query each get(key) to learn whether the key is in the Bloom filter.
● To measure the working set size, we leverage the approximation
  n* = -(m/k) * ln(1 - X/m)
  where n* is an estimate of the number of items in the filter, m is the length (size in bits) of the filter, k is the number of hash functions, and X is the number of bits set to one.
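The estimate above is straightforward to compute; here is a small helper (the function name is mine, not from the deck):

```python
import math

def estimate_cardinality(x_ones: int, m_bits: int, k: int) -> float:
    """Estimate of the number of distinct items inserted into a
    Bloom filter: n* = -(m/k) * ln(1 - X/m), where X is the number
    of bits set to one."""
    return -(m_bits / k) * math.log(1 - x_ones / m_bits)

# e.g. a 100-bit filter with k=1 hash and 50 bits set:
estimate_cardinality(50, 100, 1)  # ≈ 69.3 distinct items
```

Note that more than 50 insertions are needed to set 50 bits, because some insertions hash onto bits that are already one; the logarithm corrects for exactly that.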
8. Solution to dynamic update: Bloom filter chain
● The shadow cache is implemented as a chain of Bloom filters. Each one tracks the unique objects in one period.
[Diagram: four Bloom filters in a chain, each covering a 6h segment of the 24h window]
12. Bloom filter chain: estimate_working_set_size()
[Diagram: the bit arrays of all Bloom filters in the chain are OR-ed into a single filter, which is then used for the estimate]
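Putting the chain together: a sketch of how rotation, lookup, and the OR-then-estimate step could fit. This is a hypothetical simplification (class and method names are mine) of the approach in Alluxio's CacheManagerWithShadowCache, not its actual code; k hash functions are simulated by salting SHA-256.

```python
import hashlib
import math

class ShadowCacheChain:
    """The tracking window is split into segments (e.g. 4 x 6h),
    each tracked by its own Bloom filter stored as an int bitmask."""

    def __init__(self, m_bits=1024, k=3, segments=4):
        self.m, self.k = m_bits, k
        self.filters = [0] * segments

    def _positions(self, key):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def put(self, key):
        # Record the access in the most recent segment's filter.
        for pos in self._positions(key):
            self.filters[-1] |= 1 << pos

    def might_contain(self, key):
        # A hit in any segment counts as an infinite-cache hit.
        union = 0
        for f in self.filters:
            union |= f
        return all(union >> pos & 1 for pos in self._positions(key))

    def rotate(self):
        # Called once per segment period (e.g. every 6h):
        # drop the oldest filter, start a fresh one.
        self.filters = self.filters[1:] + [0]

    def working_set_size(self):
        # OR all bit arrays, then apply n* = -(m/k) * ln(1 - X/m).
        union = 0
        for f in self.filters:
            union |= f
        x = bin(union).count("1")
        return -(self.m / self.k) * math.log(1 - x / self.m)
```

Rotation is what makes the window slide: after four rotations (24h with 6h segments), pages seen only at the start of the window no longer count toward the working set.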
13. Memory overhead estimation
● Example: tracking 27 M pages (a 27 TB working set) uses 125 MB of memory, with only 3% error
○ Assumes four Bloom filters; each page is 1 MB
○ The memory overhead is independent of the page key type (currently {string, long})
● Memory could be reduced further with HyperLogLog, but that would not support infinite-size hit ratio estimation
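A rough sanity check of the sizing example above (assuming binary megabytes; the variable names are mine):

```python
# 125 MB budget spread over the Bloom filter chain, tracking 27 M pages.
budget_bits = 125 * 2**20 * 8     # total Bloom-filter budget in bits
pages = 27_000_000                # tracked pages (1 MB each -> ~27 TB)
bits_per_page = budget_bits / pages
print(round(bits_per_page, 1))    # ≈ 38.8 bits per tracked page
```

Roughly 39 bits per page is far less than storing the page keys themselves, which is why the overhead is independent of the key type.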
14. Implementation
● Built on the Guava BloomFilter library
● Automatically selects the Bloom filter configuration (bits, #hash functions) from a user-defined memory overhead budget and the shadow cache window
● Supports working set size in terms of #pages and #bytes
● Supports infinite-size byte hit ratio and object hit ratio
15. Usage
# The past window to define the working set
alluxio.user.client.cache.shadow.window=24h
# The total memory overhead for bloom filters used for tracking
alluxio.user.client.cache.shadow.memory.overhead=125MB
# The number of bloom filters used for tracking. Each tracks a segment of the window
alluxio.user.client.cache.shadow.bloomfilter.num=4
16. Conclusion
● We designed shadow cache: a lightweight Alluxio component to track the working set size & infinite-cache hit ratio
● Code merged: https://github.com/Alluxio/alluxio/blob/master/core/client/fs/src/main/java/alluxio/client/file/cache/CacheManagerWithShadowCache.java
● Many optimization opportunities remain
19. Motivation
1. We want to understand whether a cluster is bounded by cache storage: is adding more storage going to help the cache hit rate, and thus query latency?
2. It would also be useful to explore the potential improvement from better caching algorithms
3. We want to optimize the routing algorithm for better balance and efficiency
20. Presto routing for RaptorX
● We shard the cache based on table name among clusters
● Queries that access the same table always go to the same target cluster to maximize cache reuse
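Table-name-based sharding can be sketched as a simple hash-mod router. The cluster names and function are hypothetical illustrations, not Facebook's actual routing code:

```python
import hashlib

# Hypothetical cluster pool; hashing the table name means every query
# that reads a given table lands on the same cluster, keeping its
# cached partitions warm.
CLUSTERS = ["presto-cluster-0", "presto-cluster-1", "presto-cluster-2"]

def route(table_name: str) -> str:
    digest = hashlib.md5(table_name.encode()).hexdigest()
    return CLUSTERS[int(digest, 16) % len(CLUSTERS)]
```

The trade-off this creates is exactly the one the next slide explores: a hot table pins all of its load to one cluster, so balancing CPU requires either secondary clusters or reshuffling tables.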
22. Options for optimizing the routing logic
● Secondary cluster
○ When the primary cluster is busy, route to a designated secondary cluster that also has the cache turned on for those queries
○ Requires storing additional table caches on each cluster
● Two clusters both serving as designated primaries, with load balancing between them
○ Cache disk usage doubles (x2)
● Shuffle the tables between clusters to even out the CPU distribution based on the query pattern
○ Could make the cache storage distribution uneven and require extra cache space
23. Key metrics on shadow cache
● Shadow cache gives us insight into the cache working set and what the cache hit rate would look like with infinite cache space.
● C1: real cache usage at a certain point in time
● C2: shadow cache working set in a time window (1 day / 1 week)
● H1: real cache hit rate
● H2: shadow cache hit rate
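One way to read these four metrics against the question from the motivation slide ("is adding more storage going to help with cache hit rate?") is sketched below. The helper and its threshold are hypothetical, not from the deck:

```python
def storage_bound(c1_bytes: float, c2_bytes: float,
                  h1: float, h2: float, hit_gap: float = 0.05) -> bool:
    """Hypothetical interpretation: a cluster is cache-storage-bound
    when the working set exceeds the real cache (C2 > C1) and the
    infinite-size hit rate leaves meaningful headroom over the real
    hit rate (H2 - H1 > hit_gap)."""
    return c2_bytes > c1_bytes and (h2 - h1) > hit_gap
```

If C2 > C1 but H2 is close to H1, extra storage would mostly cache pages that are never re-read, so the investment should go to the caching or routing algorithm instead.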