Data Lakehouse, Data Mesh, and Data Fabric (r1)

  1. Data Lakehouse, Data Mesh, and Data Fabric (the alphabet soup of data architectures). James Serra, Data Platform Architecture Lead, EY. Blog:
  2. About Me
  • EY, Data Platform Architecture Lead
  • Previously a Data & AI Architect at Microsoft for seven years
  • In IT for 35 years; worked on many BI and DW projects
  • Worked as a desktop/web/database developer, DBA, BI and DW architect and developer, MDM architect, and PDW/APS developer
  • Have been a perm employee, contractor, consultant, and business owner
  • Presenter at PASS Summit, SQLBits, the Enterprise Data World conference, Big Data Conference Europe, and SQL Saturdays
  • Blog at
  • Former SQL Server MVP
  • Author of the book "Reporting with Microsoft SQL Server 2012"
  3. Agenda • Data Warehouse • Data Lake • Modern Data Warehouse • Data Fabric • Data Lakehouse • Data Mesh
  4. I tried to figure out all these data platform buzzwords on my own… And ended up passed-out drunk in a Denny’s parking lot Let’s prevent that from happening…
  5. What is a Data Warehouse and why use one? A data warehouse is where you store data from multiple data sources to be used for historical and trend-analysis reporting. It acts as a central repository for many subject areas and contains the "single version of the truth". It is NOT to be used for OLTP applications. Reasons for a data warehouse:
  • Reduce stress on the production system
  • Optimized for read access and sequential disk scans
  • Integrate many sources of data
  • Keep historical records (no need to save hardcopy reports)
  • Restructure/rename tables and fields; model the data
  • Protect against source-system upgrades
  • Use Master Data Management, including hierarchies
  • No IT involvement needed for users to create reports
  • Improve data quality and plug holes in source systems
  • One version of the truth
  • Easy to create BI solutions on top of it (e.g., Azure Analysis Services cubes)
  • No need to grant many users security access to the production systems
  Why You Need a Data Warehouse
  6. Two approaches to getting value out of data: top-down and bottom-up. Top-down: theory → hypothesis → observation → confirmation, answering "What happened?" (descriptive analytics) and "Why did it happen?" (diagnostic analytics). Bottom-up: observation → pattern → theory → hypothesis, answering "What will happen?" (predictive analytics) and "How can we make it happen?" (prescriptive analytics).
  7. Data Warehousing Uses a Top-Down Approach: understand corporate strategy → gather requirements (business and technical) → dimension modelling → ETL design → reporting & analytics design → set up infrastructure → physical design → ETL development → reporting & analytics development → install and tune, pulling from the various data sources.
  8. The "data lake" Uses a Bottom-Up Approach: ingest all data (e.g., from devices) regardless of requirements; store all data in its native format without schema definition; then do analysis using analytic engines like Hadoop (batch queries, interactive queries, real-time analytics, machine learning) and feed the data warehouse.
  9. Data Lake + Data Warehouse: Better Together. Data sources feed both, covering the full spectrum from descriptive ("What happened?") and diagnostic ("Why did it happen?") analytics to predictive ("What will happen?") and prescriptive ("How can we make it happen?") analytics.
  10. What is a data lake and why use one? A schema-on-read storage repository that holds a vast amount of raw data in its native format until it is needed. Reasons for a data lake:
  • Inexpensively store unlimited data
  • Centralized place for multiple subjects (single version of the truth)
  • Collect all data "just in case" (data hoarding); the data lake is a good place for data that you "might" use down the road
  • Easy integration of differently-structured data
  • Store data with no modeling ("schema on read")
  • Complements the enterprise data warehouse (EDW)
  • Frees up expensive EDW resources for queries instead of using them for transformations (avoiding user contention)
  • Use technologies/tools (e.g., Databricks) that refine/filter data quicker/better than your EDW
  • Quick access to data for power users/data scientists (allowing for faster ROI)
  • Data exploration to see if data is valuable before writing ETL and a schema for a relational database, or for a one-time report
  • Allows use of Hadoop tools such as ETL and extreme analytics
  • Place to land IoT streaming data
  • Online archive or backup for data warehouse data
  • With Hadoop/ADLS, high availability and disaster recovery are built in
  • Ingests large files quickly and provides data redundancy
  • ELT jobs on the EDW take too long because of increasing data volume and rate of ingestion (velocity), so offload some of them to the Hadoop data lake
  • Keep a backup of the raw data in case you need to load it again due to an ETL error (without going back to the source); you can keep a long history of raw data
  • Allows data to be used many times for different analytic needs and use cases
  • Cost savings and faster transformations: storage tiers with lifecycle management; separation of storage and compute resources, allowing multiple instances of different sizes to work with the same data simultaneously versus scaling the data warehouse; low-cost storage for raw data, saving space on the EDW
  • Extreme performance for transformations by having multiple compute options, each accessing different folders of data
  • End users or products can easily access the data from any location
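The "schema on read" idea above can be sketched in a few lines of Python. This is a hypothetical illustration, not tied to any particular lake engine: raw records land as JSON lines with no upfront model, and each consumer applies its own schema only at query time.

```python
import json
from io import StringIO

# Raw events land in the lake as-is -- no schema is enforced on write,
# so differently-shaped records can sit side by side.
raw_landing = StringIO(
    '{"device": "sensor-1", "temp_c": 21.5, "ts": "2023-01-01T00:00:00"}\n'
    '{"device": "sensor-2", "humidity": 40, "ts": "2023-01-01T00:01:00"}\n'
)

# Schema on read: each consumer decides which fields it needs and how to
# interpret them only when the data is actually queried.
def read_temperatures(lines):
    for line in lines:
        record = json.loads(line)
        if "temp_c" in record:          # tolerate records without this field
            yield record["device"], float(record["temp_c"])

print(list(read_temperatures(raw_landing)))
# -> [('sensor-1', 21.5)]
```

A second consumer could walk the same raw lines with a completely different schema (say, extracting humidity), which is exactly the "data used many times for different analytic needs" point above.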
  11. Data Warehouse: Serving, Security & Compliance
  • Business people
  • Low latency
  • Complex joins
  • Interactive ad-hoc queries
  • High number of users
  • Additional security
  • Large tool support
  • Dashboards
  • Easily create reports (self-service BI)
  • Questions are known up front
  12. Enterprise Data Maturity Stages, from "rear-view mirror" to "any data, any source, anywhere at scale":
  • Stage 1: Reactive. Structured data is transacted and locally managed; data is used reactively
  • Stage 2: Informative. Structured data is managed and analyzed centrally and informs the business
  • Stage 3: Predictive. Data capture is comprehensive and scalable and leads business decisions based on advanced analytics
  • Stage 4: Transformative. Data transforms the business to drive desired outcomes; real-time intelligence
  13. Modern Data Warehouse Whiteboarding Modern Data Warehouse explained
  14. Modern Data Warehouse
  15. Data Fabric. A Data Fabric adds to a modern data warehouse:
  • Data access
  • Data policies
  • Metadata catalog
  • MDM
  • Data virtualization
  • Data scientist tools
  • APIs
  • Building blocks/services
  • Products
  Data Fabric defined
  16. Data Lakehouse
  17. Delta Lake. Top features:
  • ACID transactions
  • Time travel (data versioning enables rollbacks and an audit trail)
  • Streaming and batch unification
  • Schema enforcement
  • Upserts and deletes
  • Performance improvements
  Databricks Delta Lake
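To make "time travel" concrete, here is a minimal, hypothetical Python sketch of a versioned table: every write commits a new immutable version, so any older version can still be read back. Delta Lake implements this with a transaction log over Parquet files; the toy class below only illustrates the idea and is not the Delta Lake API.

```python
class VersionedTable:
    """Toy illustration of data versioning / time travel (not the Delta Lake API)."""

    def __init__(self):
        self._versions = []             # each commit appends a full immutable snapshot

    def commit(self, rows):
        """Commit a new snapshot; returns its version number, like a Delta commit."""
        self._versions.append(list(rows))
        return len(self._versions) - 1

    def read(self, version=None):
        """Read the latest version, or travel back to an older one."""
        if version is None:
            version = len(self._versions) - 1
        return self._versions[version]

table = VersionedTable()
table.commit([{"id": 1, "qty": 5}])
table.commit([{"id": 1, "qty": 7}])    # an "upsert" lands as a new version

print(table.read())                    # latest: [{'id': 1, 'qty': 7}]
print(table.read(version=0))           # time travel: [{'id': 1, 'qty': 5}]
```

Because old snapshots are never mutated, rollbacks and audit trails fall out for free: rolling back is just committing an old version's rows again.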
  18. Use cases for Data Lakehouse. Today's data architectures commonly suffer from four problems:
  • Reliability: keeping the data lake and warehouse consistent
  • Data staleness: data in the warehouse is older
  • Limited support for advanced analytics: top ML systems don't work well on warehouses
  • Total cost of ownership: extra cost for data copied to the warehouse
  Lakehouse: A New Generation of Open Platforms that Unify Data Warehousing and Advanced Analytics
  19. Concerns with skipping the relational database:
  • Speed: relational databases are faster, especially MPP
  • Security: no RLS, column-level security, or dynamic data masking
  • Complexity: metadata is separate from the data in a file-based world
  • Missing features: referential integrity, TDE, workload management; other features require being locked into Spark
  Data Lakehouse & Synapse
  20. Data Mesh
  21. Data Mesh (credit to Zhamak Dehghani). It's a mindset shift where you go from:
  • Centralized ownership to decentralized ownership
  • Pipelines as a first-class concern to domain data as a first-class concern
  • Data as a by-product to data as a product
  • A siloed data engineering team to cross-functional domain-data teams
  • A centralized data lake/warehouse to an ecosystem of data products
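"Data as a product" can be made concrete with a small, hypothetical sketch: each domain team publishes its dataset behind an explicit contract (owner, schema, freshness promise) in a shared catalog, instead of handing raw tables to a central team. The field names below are illustrative only, not from any data mesh standard or product.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataProduct:
    """A domain-owned dataset published as a product (illustrative only)."""
    name: str
    domain: str                 # owning domain team, e.g. "sales"
    owner: str                  # accountable product owner, not a central IT team
    schema: dict                # published output schema the domain commits to
    freshness_sla_hours: int    # quality promise, part of the product contract

# Each cross-functional domain team registers its own products...
mesh_catalog: dict = {}

def publish(product: DataProduct) -> None:
    mesh_catalog[product.name] = product

publish(DataProduct(
    name="orders.daily",
    domain="sales",
    owner="sales-data-team",
    schema={"order_id": "string", "amount": "decimal"},
    freshness_sla_hours=24,
))

# ...and consumers discover them through the shared catalog,
# holding the owning domain (not a central team) to the contract.
print(mesh_catalog["orders.daily"].owner)   # sales-data-team
```

The point of the sketch is the shift in responsibility: quality and availability are promised by the domain that knows the data, which is the first bullet ("centralized ownership to decentralized ownership") in code form.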
  22. Use cases for Data Mesh. Data mesh tries to solve four challenges with a centralized data lake/warehouse:
  • Lack of ownership: who owns the data, the data source team or the infrastructure team?
  • Lack of quality: the infrastructure team is responsible for quality but does not know the data well
  • Organizational scaling: the central team becomes the bottleneck, such as with an enterprise data lake/warehouse
  • Technical scaling: current big data solutions can't keep up with additional data requirements
  23. Concerns with Data Mesh:
  • No standard definition of a data mesh
  • Huge investment in organizational change and technical implementation
  • Performance of combining data from multiple domains
  • Duplication of data for performance reasons
  • Getting quality engineering people for each domain
  • Inconsistent technical implementations across the domains
  • Domains don't want to wait for a data mesh
  • Need incentives for each domain to counter the extra work
  • The self-serve approach to data requests could be challenging
  • Duplication of data and of the ingestion platform
  • Creation of data silos for domains not able to join the data mesh
  • Not seeing the big picture when combining data
  Data Mesh: Centralized vs decentralized data architecture
  Data Mesh: Centralized ownership vs decentralized ownership
  24. Microsoft Data Mesh
  25. Microsoft Harmonized Mesh
  26. Data Fabric vs Data Mesh. If a Data Fabric uses data virtualization, how is it different from a Data Mesh?
  • Usually only some of the data is virtualized, so it is still mostly centralized
  • Data is not treated as a product
  • There is still a siloed data engineering team
  27. Q & A ? James Serra, Data Platform Architecture Lead Email me at: Follow me at: @JamesSerra Link to me at: Visit my blog at:

Editor's Notes

  1. So many buzzwords of late: Data Lakehouse, Data Mesh, and Data Fabric.  What do all these terms mean and how do they compare to a data warehouse?  In this session I’ll cover all of them in detail and compare the pros and cons of each.  I’ll include use cases so you can see what approach will work best for your big data needs.
  2. Fluff, but the point is that I bring real-world work experience to the session
  4. One-version-of-the-truth story: different departments using different financial formulas to calculate bonuses. This leads to reasons to use BI, and is used to convince your boss of the need for a DW. Note that you still want to do some reporting off the source system (e.g., current inventory counts). It's important to know upfront whether the data warehouse needs to be updated in real time or very frequently, as that is a major architectural decision. JD Edwards has table names like T117.
  6. Top-down starts with descriptive analytics and progresses to prescriptive analytics: you know the questions to ask, but there is lots of upfront work to get the data to where you can use it. Bottom-up starts with predictive analytics: you don't know the questions to ask, but little work needs to be done to start using the data.
There are two approaches to doing information management for analytics:
Top-down (deductive approach): analytics starts with a clear understanding of corporate strategy, where theories and hypotheses are made up front. The right data model is then designed and implemented prior to any data collection. Oftentimes the top-down approach is good for descriptive and diagnostic analytics: what happened in the past, and why did it happen?
Bottom-up (inductive approach): data is collected up front before any theories and hypotheses are made. All data is kept so that patterns and conclusions can be derived from the data itself. This type of analysis allows for more advanced analytics, such as predictive or prescriptive analytics: what will happen and/or how can we make it happen?
In Gartner's 2013 study, "Big Data Business Benefits Are Hampered by 'Culture Clash'", they make the argument that both approaches are needed for innovation to be successful. Oftentimes what happens in the bottom-up approach becomes part of the top-down approach.
  7. Also called bit bucket, staging area, landing zone or enterprise data hub (Cloudera)
  8. Adam: 2 min/11 total. Let's expand on this concept of leaders versus laggards just a bit. There are different stages of enterprise data maturity, as we see on this slide. Organizations go through several stages in this process, from being reactive or informative with data to being predictive and transformative with data. And with every step that an organization takes along these stages, its ability to be successful in digital transformation accelerates. The reason for this acceleration is simple, and to me the secret is found in the seven most important words on this slide, the seven words that define the transformative end of the spectrum: "any data, any source, anywhere at scale". This is an essential and an ambitious goal for any organization. What about third-party governmental data about demographics and income? Yes, any data. How about data formats that you have not seen before, coming from systems acquired in a recent acquisition? Yes, any source. What about data generated by devices that are only intermittently connected to the internet? Yes, anywhere. How about data that comes in 100 times as fast as it ever came in before because a movie star mentioned your product or service? Yes, at scale. The more data that customers bring to the cloud and make available for AI, the more successful they can become. As customers increasingly realize this, they start to leverage AI more and more, creating a demand pipeline for additional data to go to the cloud. Let's drill down on that next.
  9. Data Fabric adds: data access, data policies, data catalog, MDM, data virtualization, data scientist tools, APIs, building blocks, products
  11. Delta Lake, Apache Hudi, or Apache Iceberg (see A Thorough Comparison of Delta Lake, Iceberg and Hudi).
  12. Reliability: Keeping the data lake and warehouse consistent is difficult and costly. Continuous engineering is required to ETL data between the two systems and make it available to high-performance decision support and BI. Each ETL step also risks incurring failures or introducing bugs that reduce data quality, e.g., due to subtle differences between the data lake and warehouse engines.
Data staleness: The data in the warehouse is stale compared to that of the data lake, with new data frequently taking days to load. This is a step back compared to the first generation of analytics systems, where new operational data was immediately available for queries. According to a survey by Dimensional Research and Fivetran, 86% of analysts use out-of-date data and 62% report waiting on engineering resources numerous times per month [47].
Limited support for advanced analytics: Businesses want to ask predictive questions of their warehousing data, e.g., "which customers should I offer discounts to?" Despite much research on the confluence of ML and data management, none of the leading machine learning systems, such as TensorFlow, PyTorch, and XGBoost, work well on top of warehouses. Unlike BI queries, which extract a small amount of data, these systems need to process large datasets using complex non-SQL code. Reading this data via ODBC/JDBC is inefficient, and there is no way to directly access the internal proprietary warehouse formats. For these use cases, warehouse vendors recommend exporting data to files, which further increases complexity and staleness (adding a third ETL step!). Alternatively, users can run these systems against data lake data in open formats; however, they then lose the rich management features of data warehouses, such as ACID transactions, data versioning, and indexing.
Total cost of ownership: Apart from paying for continuous ETL, users pay double the storage cost for data copied to a warehouse, and commercial warehouses lock data into proprietary formats that increase the cost of migrating data or workloads to other systems.
  13. Speed: Queries against relational storage will always be faster than against a data lake (roughly 5X) because of missing features in the data lake, such as the lack of statistics, query plans, result-set caching, materialized views, in-memory caching, SSD-based caches, indexes, and the ability to design and align data and tables.
Counter: DirectParquet, CSV 2.0, query acceleration, predicate pushdown, and SQL on-demand auto-scaling are some of the features that can make queries against ADLS nearly as fast as against a relational database. Then there are features like Delta Lake and the ability to use statistics for external tables that can add even more performance. Plus you can import the data into Power BI, use Power BI aggregation tables, or import the data into Azure Analysis Services to get even faster performance. Another thing to keep in mind affecting query performance: Synapse is a massively parallel processing (MPP) technology that has features such as replicated tables for smaller tables (i.e., dimension tables) and distributed tables for large tables (i.e., fact tables), with the ability to control how they are distributed across storage (hash, round-robin). This can provide much greater performance compared to a data lake that uses HDFS, where large files are chunked across storage.
Security: Row-level security (RLS), column-level security, dynamic data masking, and data discovery & classification are security-related features that are not available in a data lake.
Counter: Use RLS in Power BI, or RLS on external tables instead of RLS on a database table, which then allows you to use result-set caching in Synapse.
Complexity: Schema-on-read (ADLS) is more complex to query than schema-on-write (a relational database). Schema-on-read means the end user must define the metadata, whereas with schema-on-write the metadata is stored along with the data. Then there is the difficulty of querying in a file-based world compared to a relational database world.
Counter: Create a SQL relational view on top of the files in the data lake so the end user does not have to create the metadata, making it appear to the end user that the data is in a relational database. Or you could import the data from the data lake into Power BI, creating a star schema model in a Power BI dataset. But I still see it being very difficult to manage a solution with just a data lake when you have data from many sources. Having the metadata along with the data in a relational database lets everyone be on the same page as to what the data actually means, versus more of a wild west with a data lake.
Missing features: Auditing, referential integrity, ACID compliance, updating/deleting rows of data, data caching, Transparent Data Encryption (TDE), workload management, and full support of T-SQL are all unavailable in a data lake.
Counter: Some of these features can be had by using Delta Lake, Apache Hudi, or Apache Iceberg (see A Thorough Comparison of Delta Lake, Iceberg and Hudi), but they will not be as easy to implement as in a relational database, and you will be locked into using Spark. Also, features being added to Blob Storage (see More Azure Blob Storage enhancements) can be used instead of resorting to Delta Lake, such as blob versioning as a replacement for Delta Lake's time travel.
  14. What is Data Mesh? Data Mesh, an approach founded by Zhamak Dehghani, refers to a decentralized, distributed approach to enterprise data management. It is a holistic concept that sees different datasets as distributed products, oriented around domains. The idea is that each domain-specific dataset has its own embedded engineers and product owners to manage that data and its availability to other teams, driving a level of data ownership and responsibility which is often lacking in current data platforms that are largely centralized, monolithic, and often built around complex pipelines.
MDW: missing are data access, data policies, data catalog, MDM, data virtualization, data scientist tools, APIs, building blocks, products
Data fabric: HR, finance, payroll, operations
Concerning data virtualization: if a data fabric uses data virtualization to keep data in place, then I would say a data fabric and a data mesh are the same thing. Maybe a difference is that a data mesh has standards/frameworks on how each domain handles its data, treating data as a product, where a data fabric does not.
Data lakehouse tradeoffs: speed, security, no metadata, extra complexity, MDM, referential integrity
Leading edge: security/ABAC, data ingestion, cataloging, size of 3rd-party data sources, using it to feed products, data explorer/marketplace, GSL, data virtualization, APIs
Other companies (generalities): most are at MDW; depends on size
Microsoft: Enterprise Scale Analytics and AI
MS relationship: advancing capabilities, giving feedback, executive awareness
Microsoft gaps: ABAC, MDM, data virtualization (Denodo, Dremio, Fraxses)
Removed: how TDF meets business goals and strategic objectives; how TDF addresses stakeholder concerns