5 Critical Steps to Clean Your Data Swamp When Migrating Off of Hadoop

In this session, learn how to quickly supplement your on-premises Hadoop environment with a simple, open, and collaborative cloud architecture that enables you to generate greater value with scaled application of analytics and AI on all your data. You will also learn five critical steps for a successful migration to the Databricks Lakehouse Platform along with the resources available to help you begin to re-skill your data teams.

  1. Clean Your Data Swamp by Migrating off Hadoop
  2. Speaker: Ron Guerrero, Senior Solutions Architect
  3. Agenda ● Why modernize? ● Planning your migration off of Hadoop ● Top migration topics
  4. Why migrate off of Hadoop and onto Databricks?
  5. History of Hadoop ● Created in 2005 ● Open-source distributed processing and storage platform running on commodity hardware ● Originally consisted of HDFS and MapReduce, but now incorporates numerous open-source projects (Hive, HBase, Spark) ● Runs on-prem and in the cloud
  6. Today Hadoop is very hard:
     ● COMPLEX, so innovation is slow: many tools, so teams need to understand multiple technologies; real-time and batch ingestion to build AI models requires integrating many components.
     ● FIXED, so it is cost prohibitive: 24/7 clusters; fixed capacity (CPU + RAM + disk); costly to upgrade.
     ● MAINTENANCE INTENSIVE, so productivity is low: the Hadoop ecosystem is complex, hard to manage, and prone to failures.
  7. Enterprises Need a Modern Data Analytics Architecture. Critical requirements: cost-effective scale and performance in the cloud; easy to manage and highly reliable for diverse data; predictive and real-time insights to drive innovation.
  8. Lakehouse Platform: structured, semi-structured, unstructured, and streaming data in an open data lake, with data engineering, BI & SQL analytics, real-time data applications, and data science & machine learning on top of shared data management & governance. SIMPLE, OPEN, COLLABORATIVE.
  9. Planning your migration off of Hadoop and onto Databricks
  10. Migration Planning ● Internal questions ● Assessment ● Technical planning ● Enablement and evaluation ● Migration execution
  11. Migration Planning: Internal Questions ● Why? ● Who? ● Desired start and end dates ● Internal stakeholders ● Cloud strategy
  12. Migration Planning: Assessment ● Environment inventory (compute, data, tooling) ● Use case prioritization ● Workload analysis ● Existing TCO ● Projected TCO ● Migration timelines
  13. Migration Planning: Technical Planning ● Target state architecture ● Data migration ● Workload migration (lift and shift, transformative, or hybrid) ● Data governance approach ● Automated deployment ● Monitoring and operations
  14. Migration Planning: Enablement and Evaluation ● Workshops, technical deep dives ● Training ● Proof of technology / MVP to validate assumptions and designs
  15. Migration Planning: Migration Execution ● Environment deployment ● Iterate over use cases: ○ Data migration ○ Workload migration ○ Dual production deployment (old and new) ○ Validation ○ Cut-over and decommissioning of Hadoop
  16. Top Migration Topics
  17. Key Areas of Migration 1. Administration 2. Data Migration 3. Data Processing 4. Security & Governance 5. SQL and BI Layer
  18. Administration
  19. Hadoop Ecosystem to Databricks Concepts: the Hadoop side
     [Architecture diagram: N Hadoop nodes, each with local HDFS disks and YARN carving a fixed pool of cores (2x12c = 24c per node) among Impala, HBase, MapReduce mappers, and Spark executors; Hive Metastore, Hive Server, an Impala load balancer, and HBase APIs serve JDBC/ODBC clients.]
     Node makeup: ▪ Local disks ▪ Cores/memory carved up among services ▪ Submitted jobs compete for resources ▪ Services constrained to accommodate resources
     Metadata and security: ▪ Sentry table metadata permissions combined with synced HDFS ACLs, OR ▪ Apache Ranger policy-based access control
     Endpoints: ▪ Direct access to HDFS / copied datasets ▪ Hive (on MapReduce or Spark) accepts incoming connections ▪ Impala for interactive queries ▪ HBase APIs as required
  20. Hadoop Ecosystem to Databricks Concepts: side by side
     [Architecture diagram contrasting the Hadoop stack above with Databricks: a managed Hive metastore; a Databricks SQL endpoint (JDBC/ODBC) backed by a high-concurrency cluster for SQL analytics; ephemeral all-purpose or jobs clusters with Delta Engine for Spark ETL (batch/streaming), SQL analytics, and the ML runtime; table ACLs plus object storage ACLs; object storage in place of HDFS; and CosmosDB/DynamoDB/Keyspaces in place of HBase.]
  21. Hadoop Ecosystem to Databricks Concepts: the Databricks side
     Node makeup: ▪ Each node (VM) maps to a single Spark driver or worker ▪ A cluster of nodes is completely isolated from other jobs/compute ▪ Compute and storage are decoupled
     Metadata and security: ▪ Managed Hive metastore (other options available) ▪ Table ACLs (Databricks) and object storage permissions
     Endpoints: ▪ SQL endpoint for both advanced analytics and simple SQL analytics ▪ Code access to data via notebooks ▪ HBase maps to Azure CosmosDB or AWS DynamoDB/Keyspaces (non-Databricks solutions)
  22. Demo: Administration
  23. Data Migration
  24. Data Migration. From HDFS: on-premises block storage; fixed disk capacity; health checks to validate data integrity; as data volumes grow, you must add more nodes to the cluster and rebalance data. MIGRATE to cloud object storage: fully managed; unlimited capacity; no maintenance, health checks, or rebalancing; 99.99% availability and 99.999999999% durability; use native cloud services to migrate data; leverage partner solutions.
  25. Data Migration: build a data lake in cloud storage with Delta Lake ● Open source, using the Parquet file format ● Performance: data indexing for faster queries ● Reliability: ACID transactions guarantee data integrity ● Scalability: handles petabyte-scale tables with billions of partitions and files with ease ● Enhanced Spark SQL: UPDATE, MERGE, and DELETE commands ● Unifies batch and stream processing: no more Lambda architecture ● Schema enforcement: specify schema on write ● Schema evolution: automatically change schemas on the fly ● Audit history: full audit trail of changes ● Time travel: restore data from past versions ● 100% compatible with the Apache Spark API (a short example follows)
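To make the MERGE and time-travel bullets concrete, here is a minimal PySpark sketch; the paths and column names are hypothetical, and on Databricks the `spark` session is predefined.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # already defined in Databricks notebooks

# Upsert a batch of changes into a Delta table (hypothetical paths/columns).
updates = spark.read.parquet("/mnt/landing/customer_updates")
target = DeltaTable.forPath(spark, "/mnt/lake/customers")

(target.alias("t")
 .merge(updates.alias("u"), "t.customer_id = u.customer_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())

# Time travel: read the table as of an earlier version.
v0 = spark.read.format("delta").option("versionAsOf", 0).load("/mnt/lake/customers")
```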
  26. Start with dual ingestion ● Add a feed to cloud storage ● Enable new use cases with new data ● Introduces options for backup (see the sketch below)
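One hedged way to implement the dual feed with Structured Streaming, assuming a Kafka source, network connectivity to the on-prem HDFS, and hypothetical broker addresses and paths: use foreachBatch to write each micro-batch to both the existing landing zone and cloud storage.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
          .option("subscribe", "events")
          .load())

def dual_write(batch_df, batch_id):
    # Keep feeding the existing on-prem landing zone...
    batch_df.write.mode("append").parquet("hdfs://onprem-nn:8020/landing/events")
    # ...while also landing the same data in cloud object storage as Delta.
    batch_df.write.mode("append").format("delta").save("s3://datalake/landing/events")

(events.writeStream
 .foreachBatch(dual_write)
 .option("checkpointLocation", "s3://datalake/_checkpoints/events")
 .start())
```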
  27. How to migrate data ● Leverage existing data delivery tools, pointing them at cloud storage ● Introduce simplified flows to land data into cloud storage
  28. How to migrate data ● Push the data: ○ DistCp ○ 3rd-party tooling ○ In-house frameworks ○ Cloud-native appliances (AWS Snowmobile, Azure Data Box, Google Transfer Appliance) ○ Typically easier to approve (security) ● Pull the data (see the sketch below): ○ Spark Streaming ○ Spark batch (file ingest, JDBC) ○ 3rd-party tooling
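As a flavor of the Spark-batch "pull" path: a minimal sketch, assuming the Databricks cluster has network connectivity to the on-prem HDFS namenode; the namenode address, paths, and partition column are hypothetical.

```python
# `spark` is predefined in Databricks notebooks; shown explicitly for completeness.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Pull Parquet data directly from on-prem HDFS over the network...
df = spark.read.parquet("hdfs://onprem-nn:8020/warehouse/sales")

# ...and land it as a Delta table in cloud object storage.
(df.write
 .format("delta")
 .mode("append")
 .partitionBy("sale_date")  # assumes the source data carries this column
 .save("s3://datalake/bronze/sales"))
```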
  29. How to migrate data: the pull approach ● Set up connectivity to on-premises: ○ AWS Direct Connect ○ Azure ExpressRoute / VPN Gateway ○ This may be needed for some use cases ● Kerberized Hadoop environments: ○ Databricks cluster initialization scripts handle ■ Kerberos client setup ■ krb5.conf and keytab ■ kinit ● Shared external metastore: ○ Databricks and Hadoop can share a metastore (a config sketch follows)
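For the shared-metastore option, a Databricks cluster can be pointed at an existing external Hive metastore through Spark configuration. A minimal sketch of the relevant keys with placeholder values; in practice these go into the cluster's Spark config or an init script rather than notebook code, and the exact metastore version and JDBC driver depend on your environment.

```python
# Sketch: Spark config keys for attaching a Databricks cluster to an external
# Hive metastore shared with Hadoop. All values below are placeholders.
external_metastore_conf = {
    "spark.sql.hive.metastore.version": "2.3.9",
    "spark.sql.hive.metastore.jars": "builtin",
    # JDBC connection to the metastore database shared with the Hadoop cluster.
    "spark.hadoop.javax.jdo.option.ConnectionURL": "jdbc:mysql://metastore-db:3306/hive",
    "spark.hadoop.javax.jdo.option.ConnectionDriverName": "org.mariadb.jdbc.Driver",
    "spark.hadoop.javax.jdo.option.ConnectionUserName": "hive",
    "spark.hadoop.javax.jdo.option.ConnectionPassword": "<secret>",  # use a secret store
}
```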
  30. Demo: Databricks Pull
  31. Data Processing
  32. Technology Mapping
  33. Migrating Spark Jobs ● Spark versions ● RDDs to DataFrames (example below) ● Changes to job submission ● Hard-coded references to the Hadoop environment
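A typical before/after for the RDD-to-DataFrame item; file paths and column positions are hypothetical, and `sc`/`spark` are predefined in Databricks notebooks.

```python
# Before: RDD-style count-by-key, common in Hadoop-era Spark jobs.
rdd = sc.textFile("hdfs://onprem-nn:8020/logs/access.log")
counts_rdd = (rdd
              .map(lambda line: (line.split(",")[0], 1))
              .reduceByKey(lambda a, b: a + b))

# After: the same aggregation as a DataFrame, which the Catalyst optimizer
# can plan, and which ports cleanly to Databricks and Delta Lake.
df = spark.read.csv("s3://datalake/logs/access.log")
counts_df = df.groupBy("_c0").count()  # _c0: default name for the first CSV column
```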
  34. Converting non-Spark workloads: considerations for ● MapReduce ● Sqoop ● Flume ● NiFi
  35. Migrating HiveQL ● Hive queries have high compatibility ● Minor changes in DDL (example below) ● Watch for SerDes and UDFs
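An illustrative DDL change, run here via spark.sql; the table names, columns, and locations are hypothetical. In many cases the storage clause is the only line that changes.

```python
# Hive-style external Parquet table, as it might exist on the Hadoop side.
spark.sql("""
    CREATE TABLE IF NOT EXISTS sales_raw (id BIGINT, amount DOUBLE, sale_date DATE)
    USING PARQUET
    LOCATION 's3://datalake/raw/sales'
""")

# Delta Lake equivalent on Databricks: swap the storage clause for USING DELTA.
spark.sql("""
    CREATE TABLE IF NOT EXISTS sales_bronze (id BIGINT, amount DOUBLE, sale_date DATE)
    USING DELTA
    LOCATION 's3://datalake/bronze/sales'
""")
```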
  36. Migrating Workflow Orchestration ● Create Airflow, Azure Data Factory, or other equivalents ● The Databricks REST API allows integration with any scheduler (sketch below)
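A hedged sketch of that scheduler integration, calling the Jobs API run-now endpoint; the workspace URL, token, and job ID are placeholders.

```python
import requests

DATABRICKS_HOST = "https://<workspace>.cloud.databricks.com"  # placeholder
TOKEN = "<personal-access-token>"                             # placeholder

# Trigger an existing Databricks job from any external scheduler.
resp = requests.post(
    f"{DATABRICKS_HOST}/api/2.1/jobs/run-now",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"job_id": 123},  # placeholder job ID
)
resp.raise_for_status()
print("Started run:", resp.json()["run_id"])
```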
  37. Automated Tooling ● MLens: ○ PySpark ○ HiveQL ○ Oozie to Airflow or Azure Data Factory (roadmap)
  38. Security and Governance
  39. Security and Governance ● Authentication: single sign-on (SSO) with any SAML 2.0 corporate directory ● Authorization: access control lists (ACLs) for Databricks RBAC; table ACLs; dynamic views for column/row-level permissions (example below); cloud-native security such as IAM federation and AAD passthrough; integration with Ranger and Immuta for more advanced RBAC and ABAC ● Metadata management: integration with 3rd-party services such as AWS Glue
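An example of the dynamic-view pattern, using the Databricks is_member() group check; the view, table, column, and group names are hypothetical.

```python
# Column- and row-level security in one dynamic view.
spark.sql("""
    CREATE OR REPLACE VIEW sales_redacted AS
    SELECT
        order_id,
        -- Column-level: only members of the auditors group see raw emails.
        CASE WHEN is_member('auditors') THEN email ELSE 'REDACTED' END AS email,
        amount
    FROM sales_raw
    -- Row-level: non-managers only see transactions up to 10,000.
    WHERE CASE WHEN is_member('managers') THEN TRUE ELSE amount <= 10000 END
""")
```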
  40. Privacera
  41. Migrating Security Policies from Hadoop to Databricks: enabling enterprises to responsibly use their data in the cloud. Powered by Apache Ranger.
  42. HADOOP ECOSYSTEM ● Hundreds to thousands of tables in Apache Hive ● Hundreds of policies in Apache Ranger ● A variety of policies: resource-based, tag-based, masking, row-level filters, etc. ● Policies for users and groups from AD/LDAP
  43. PRIVACERA AND DATABRICKS [Diagram: dataset schemas flow from the Hive Metastore into the Privacera MetaStore, where policies are attached.]
  44. SEAMLESS MIGRATION: instantly transfer years of effort and implement the same policies in Databricks as on-prem
  45. Privacera value add: enhancing Databricks authorization ● Richer, deeper, and more robust access control ● Row/column-level access control in SQL ● Dynamic and static data de-identification ● File-level access control for DataFrames, plus object-level access ● Read/write operations supported

     Spark SQL and R           | Privacera + Databricks
     Table                     | Y
     Column                    | Y
     Column masking            | Y
     Row-level filtering       | Y
     Tag-based policies        | Y
     Attribute-based policies  | Y
     Centralized auditing      | Y

     Object store (S3/ADLS)    | Privacera + Databricks
     S3 - bucket level         | Y
     S3 - object level         | Y
     ADLS                      | Y
  46. [Architecture diagram: a Databricks SQL/Python cluster runs a Ranger plugin on the Spark driver alongside the Spark executors; the plugin pulls policies from the Ranger Policy Manager in Privacera Cloud, which also hosts the Privacera portal, Privacera Discovery, anomaly detection and alerting, and an approval workflow, integrated with AD/LDAP and a 3rd-party catalog; audit events flow through the Privacera audit server to a DB, Solr, Apache Kafka, Splunk, CloudWatch, or a SIEM.]
  47. SQL and BI
  48. What about the SQL community? Hadoop: ● HUE (data browsing, SQL editor, visualizations) ● Interactive SQL (Impala, Hive LLAP). Databricks: ● SQL Analytics workspace (data browser, SQL editor, visualizations) ● Interactive SQL (Spark optimizations such as Adaptive Query Execution, advanced caching, Project Photon, scaling across clusters of clusters)
  49. SQL & BI Layer ● Optimized SQL and BI performance: fast queries with Delta Engine on Delta Lake; support for high concurrency with auto-scaling clusters; optimized JDBC/ODBC drivers; tuned for BI and SQL out of the box ● BI integrations: compatible with any BI client and tool that supports Spark
  50. Vision ● Give SQL users a home in Databricks: a SQL workbench, light dashboarding, and alerting capabilities ● A great BI experience on the data lake: enable companies to effectively leverage the data lake from any BI tool without having to move the data around ● Easy to use & price-performant: minimal setup & configuration, with data lake price-performance
  51. SQL-native user interface for analysts ▪ Familiar SQL editor: auto-complete, built-in visualizations, data browser ▪ Automatic alerts: triggered on values, with email or Slack integration ▪ Dashboards: simply convert queries to dashboards and share with access control
  52. Built-in connectors for existing BI tools, plus other BI & SQL clients that support Spark ▪ Supports your favorite tool: connectors for top BI & SQL clients, simple connection setup, optimized performance ▪ OAuth & single sign-on: a quick and easy authentication experience with no access tokens to manage ▪ Power BI available now; others coming soon
  53. Performance ▪ Delta metadata performance: improved read performance for cold queries on Delta tables; interactive metadata performance regardless of the number of Delta tables in a query or their sizes ▪ New ODBC/JDBC drivers: a re-engineered wire protocol provides lower latency and less overhead (~¼ sec) via fewer round trips per request; higher transfer rates (up to 50%) using Apache Arrow; and optimized metadata performance for ODBC/JDBC APIs (up to 10x for metadata retrieval operations) ▪ Photon, the Delta Engine [Preview]: a new MPP engine built from scratch in C++, vectorized to exploit data-level and instruction-level parallelism, optimized for modern structured and semi-structured workloads
  54. Summary
  55. It all starts with a plan ● Databricks and our partner community can help you: ○ Assess ○ Plan ○ Validate ○ Execute
  56. Considerations for your migration to Databricks ● Administration ● Data Migration ● Data Processing ● Security & Governance ● SQL and BI Layer
  57. Next Steps
  58. Next Steps ● You will receive a follow-up email from our teams ● Let us help you with your Hadoop migration journey
  59. Follow-up materials: useful links
  60. Databricks Reference Architecture
  61. Databricks Azure Reference Architecture
  62. Databricks AWS Reference Architecture
  63. Demo
