Sophisticated data instrumentation and collection technologies are driving unprecedented data growth. Data-driven organizations need to store data scalably and perform complex processing on the collected data (i.e., not just "queries"). Given the unstructured nature of the source data, and the need to stay agile, organizations also need to be able to change their schemas dynamically (at read time rather than write time). Apache Hadoop is an open-source, distributed, fault-tolerant system that leverages commodity hardware to achieve large-scale, agile data storage and processing. In this presentation, Dr. Amr Awadallah will introduce the design principles behind Apache Hadoop and explain the architecture of its core subsystems: the Hadoop Distributed File System (HDFS) and MapReduce. Amr will also contrast Hadoop with relational database systems and illustrate how the two truly complement each other. Finally, Amr will cover the Hadoop ecosystem at large, which includes a number of projects that together form a cohesive data operating system for the modern data center.
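To make the MapReduce model mentioned above concrete, here is a minimal sketch of its three phases (map, shuffle, reduce) using the classic word-count example. This is plain Python simulating the programming model for illustration only, not Hadoop's actual Java API; in Hadoop, the shuffle is performed by the framework between the map and reduce tasks.

```python
from collections import defaultdict

def map_phase(lines):
    # Map: for each input record, emit (word, 1) key-value pairs.
    for line in lines:
        for word in line.split():
            yield (word, 1)

def shuffle(pairs):
    # Shuffle: group all emitted values by key
    # (done automatically by the Hadoop framework).
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate the values for each key; here, sum the counts.
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["hadoop stores data", "hadoop processes data"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts)  # {'hadoop': 2, 'stores': 1, 'data': 2, 'processes': 1}
```

Because both the map and reduce functions operate on independent keys, the framework can run many copies of each in parallel across the cluster, which is what lets the same simple program scale to very large datasets.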
Introducing Apache Hadoop: The Modern Data Operating System - Stanford EE380
11/16/2011, Stanford EE380 Computer Systems Colloquium
Introducing Apache Hadoop:
The Modern Data Operating System
Dr. Amr Awadallah | Founder, CTO, VP of Engineering
aaa@cloudera.com, twitter: @awadallah