3. The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remain at the sole discretion of Oracle.
6. Database Machine Success: “Every query was faster on Exadata compared to our current systems. The smallest performance improvement was 10x and the biggest one was an incredible 72x.” (Simeon Dimitrov, Enterprise Resources Manager) “Call Data Record queries that used to run for over 30 minutes now complete in under 1 minute. That's extreme performance.” (Grant Salmon, CEO, LGR Telecommunications) “A query that used to take 24 hours now runs in less than 30 minutes. The Oracle Database Machine beats competing solutions on bandwidth, load rate, disk capacity, and transparency.” (Christian Maar, CIO)
7. Database Machine Success: “The Oracle Database Machine is an ideal cost-effective platform to meet our speed and scalability needs.” (Ketan Parekh, Manager, Database Systems) “After carefully testing several data warehouse platforms, we chose the Oracle Database Machine. Oracle Exadata was able to speed up one of our critical processes from days to minutes.” (Brian Camp, Sr. VP of Infrastructure Services)
8. Extreme Performance Gains: customer benchmark results showed average gains of 28x, 20x, 16x, and 15x for customers in the telecom, retail, and finance industries.
9. “When it comes to speed, Oracle Exadata technology has changed the game completely…” Grant Salmon, CEO, LGR Telecommunications (Profit Magazine, February 2009)
24. Sun Oracle Exadata Storage Server • 14 Sun Fire X4275 per rack • 5x faster than conventional storage • 2x more storage capacity • Simplifies storage to eliminate complex SAN architectures • Sun FlashFire Technology turbocharges applications
26. Sun FlashFire Technology: Extreme Performance Accelerator (New) • 10x better IO response time • 5.25 Terabytes Flash per rack • 1,000,000 IOPS per rack • 20x IOPS speedup for Oracle • Integrated super caps for data retention
28. InfiniBand Network High Bandwidth, Low Latency • Sun Datacenter InfiniBand Switch 36 • Fully redundant non-blocking IO paths from servers to storage • 2.88 Tb/sec bi-sectional bandwidth per switch • 40 Gb/sec QDR, Dual port QSFP per server
30. Start Small and Grow: Full Rack, Half Rack, Quarter Rack, Basic System
35. Exadata Product Capacity

                                    Single Server   Quarter Rack   Half Rack   Full Rack
Raw Disk (1)               SAS      7.2 TB          21 TB          50 TB       100 TB
                           SATA     24 TB           72 TB          168 TB      336 TB
Raw Flash (1)                       384 GB          1.1 TB         2.6 TB      5.3 TB
User Data (2)              SAS      2 TB            6 TB           14 TB       28 TB
(assuming no compression)  SATA     7 TB            21 TB          50 TB       100 TB

1 – Raw capacity calculated using 1 GB = 1000 x 1000 x 1000 bytes and 1 TB = 1000 x 1000 x 1000 x 1000 bytes.
2 – User Data: actual space for end-user data, computed after single mirroring (ASM normal redundancy) and after allowing space for database structures such as temp, logs, undo, and indexes. Actual user data capacity varies by application. User Data capacity calculated using 1 TB = 1024 x 1024 x 1024 x 1024 bytes.
36. Exadata Product Performance

                                            Single Server   Quarter Rack   Half Rack   Full Rack
Raw Disk Data Bandwidth (1,4)      SAS      1.5 GB/s        4.5 GB/s       10.5 GB/s   21 GB/s
                                   SATA     0.85 GB/s       2.5 GB/s       6 GB/s      12 GB/s
Raw Flash Data Bandwidth (1,4)              3.6 GB/s        11 GB/s        25 GB/s     50 GB/s
Max User Data Bandwidth (2,4)               36 GB/s         110 GB/s       250 GB/s    500 GB/s
(10x compression and Flash)
Disk IOPS (3,4)                    SAS      3,600           10,800         25,000      50,000
                                   SATA     1,440           4,300          10,000      20,000
Flash IOPS (3,4)                            75,000          225,000        500,000     1,000,000
Data Load Rate (4)                          0.65 TB/hr      1 TB/hr        2.5 TB/hr   5 TB/hr

1 – Bandwidth is peak physical disk scan bandwidth, assuming no compression.
2 – Max User Data Bandwidth assumes scanned data is compressed by a factor of 10 and is on Flash.
3 – IOPS based on IO requests of size 8K.
4 – Actual performance will vary by application.
41. Exadata Smart Scan Query Example. The business question “What were my sales yesterday?” becomes SELECT SUM(sales) ... WHERE date = '24-Sept'. In the Oracle Database Grid the optimizer chooses the partitions and indexes to access; the Exadata Storage Grid then scans the compressed blocks in those partitions and indexes and retrieves just the sales amounts for Sept 24. Of the 10 TB scanned in storage, only about 1 MB is returned to the database servers, which compute the SUM.
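As a rough sketch of what this looks like in practice (the SALES table and its columns are hypothetical; the statistic names are the 11.2 session statistics associated with Smart Scan), the offload can be confirmed from the session statistics after the query runs:

    -- Hypothetical fact table; Exadata offloads the scan and the predicate to storage.
    SELECT SUM(s.amount)
      FROM sales s
     WHERE s.sale_date = DATE '2009-09-24';

    -- Confirm the scan was offloaded (Smart Scan) for this session.
    SELECT n.name, s.value
      FROM v$mystat s
      JOIN v$statname n ON n.statistic# = s.statistic#
     WHERE n.name IN ('cell physical IO bytes eligible for predicate offload',
                      'cell physical IO interconnect bytes returned by smart scan');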
53. Benefits Multiply with Compression: 10 TB of user data normally requires 10 TB of IO. With compression it shrinks to 1 TB, with partition pruning to 100 GB, with Storage Indexes to 20 GB, and with Smart Scan on memory or Flash only 5 GB is actually processed, giving sub-second response on the Database Machine. The data is 10x smaller and scans are roughly 2000x faster (10 TB reduced to 5 GB of IO).
68. Active Data Guard and Low-Cost DR: either physical or logical standbys can be opened. [Diagram: a primary database on Exadata racks with SAS disks ships redo over Oracle Net to a standby database on an Exadata rack with SATA disks; the standby is used for backups and reporting.]
The Sun Oracle Database Machine Full Rack combines Sun Oracle Exadata Storage Servers with Oracle Database in a complete, pre-optimized and pre-configured package of software, servers, and storage. Simple and fast to install, the Oracle Database Machine is ready to start tackling your business queries immediately, out of the box. The Full Rack is a building block: you can add more racks as your data warehouse grows. The Sun Oracle Database Machine Full Rack consists of 8 database servers, 14 Sun Oracle Exadata Storage Servers, 3 InfiniBand switches, 1 Gigabit Ethernet switch, and a KVM. Oracle is the first point of contact for all hardware and software issues and will manage the problem to resolution.
InfiniBand throughput is based on 22 servers with one dual-port 40 Gb/sec card per server.
14 Exadata cells, 168 disk drives, and 64 database server cores in total; 3 36-port InfiniBand QDR (40 Gb/sec) switches, enough to add up to 7 more racks just by adding cables between the racks; a 48-port Cisco Ethernet switch (admin); and a KVM.
Archival Compression is the best approach for ILM and data archival: use it on complete tables, or combine it with OLTP compression using partitioning (see the sketch below). It gives a minimal storage footprint while the data stays online and always accessible, with no need to move data to tape or configure multiple disk tiers. You can run queries against historical data without recovering from tape, and you can still update historical data. The feature supports schema evolution (add/drop columns, indexes, etc.) and benefits any application with data retention requirements.
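A minimal sketch, assuming a hypothetical range-partitioned ORDERS table, of how archival compression is declared and combined with OLTP compression:

    -- Compress a whole historical table for archival.
    CREATE TABLE orders_archive COMPRESS FOR ARCHIVE HIGH
      AS SELECT * FROM orders WHERE order_date < DATE '2008-01-01';

    -- Or mix compression types by partition: cold partitions use archival
    -- compression, the current partition stays OLTP-compressed.
    ALTER TABLE orders MOVE PARTITION orders_2007 COMPRESS FOR ARCHIVE HIGH;
    ALTER TABLE orders MODIFY PARTITION orders_2009 COMPRESS FOR OLTP;

The compressed partitions remain fully queryable and can still be updated.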
Compression ratios based on Hybrid Columnar Compression “Query Default” and “Archive High”
Note that enterprise storage arrays now support flash disks, but no vendor has published IOPS numbers for its storage array using flash. The I/O performance numbers shown here are measured at the database level; they are not pure storage statistics that cannot be achieved in practice. Some vendors quote component-level performance numbers that cannot be achieved in a complete system because of bottlenecks in other parts of the system. Also remember, when comparing to other products, that this is a full system including servers, storage, and networking, not a pure storage device.
Why is Oracle faster? DB processing in storage, Smart Flash Cache, a faster interconnect (40 Gb/sec), more disks, and faster disks (15K RPM).
TPC-H 1 TB, 11gR1 on Superdome (04/29/09): 64 cores, 768 disks (146 GB, 15K RPM). In-memory execution algorithms cache partitions in memory on different DB nodes; parallel servers (a.k.a. PQ slaves) are then executed on the corresponding nodes.
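A sketch of how this in-memory parallel execution behavior is enabled in 11gR2 (the FACT_SALES table is a hypothetical example); with the automatic degree-of-parallelism policy, the database decides when to cache scanned objects in the buffer caches of the RAC nodes and runs the parallel servers on the nodes that hold those fragments:

    -- Enable Auto DOP and in-memory parallel execution (11gR2).
    ALTER SYSTEM SET parallel_degree_policy = AUTO;

    -- Large scans may now be satisfied from the aggregated buffer caches
    -- of the cluster instead of from disk.
    SELECT /*+ PARALLEL(f) */ COUNT(*) FROM fact_sales f;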
Because ASM is a volume management and file system component within the database, it is designed to provide a file management layer optimized for the database. ASM optimizes performance by striping and optionally mirroring files across all the disks under its management. Additionally, ASM lets you alter the storage configuration by adding or removing disks under its management without requiring the database to be taken down. Finally, ASM is cluster-aware, supporting RAC as well as multiple databases under a single ASM domain.
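A sketch of the online reconfiguration described above (the disk group and disk names are hypothetical; on Exadata the disk path takes the o/<cell IP>/<grid disk> form):

    -- Add a disk while the database stays up; ASM rebalances data automatically.
    ALTER DISKGROUP data ADD DISK 'o/192.168.10.5/DATA_CD_11_cell05';

    -- Drop a disk online; the rebalance runs in the background at the given power.
    ALTER DISKGROUP data DROP DISK DATA_CD_11_CELL05 REBALANCE POWER 4;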
A Cell Disk is the virtual representation of the physical disk, minus the System Area LUN (if present), and is one of the key disk objects the administrator manages within an Exadata cell. A Cell Disk is represented by a single LUN, which is created and managed automatically by the Exadata software when the physical disk is discovered. On the first two disks, approximately 13 GB of space is used for the system area. On the other 10 disks, the system area is approximately 50 MB. Cell Disks can be further virtualized into one or more Grid Disks. Grid Disks are the disk entity assigned to ASM, as ASM disks, to manage on behalf of the database for user data. The simplest case is when a single Grid Disk takes up the entire Cell Disk, but it is also possible to partition a Cell Disk into multiple Grid Disk slices. Placing multiple Grid Disks on a Cell Disk allows the administrator to segregate the storage into pools with different performance or availability requirements. Grid Disk slices can be used to allocate “hot”, “warm” and “cold” regions of a Cell Disk, or to separate databases sharing Exadata disks. For example, a Cell Disk could be partitioned such that one Grid Disk resides on the higher-performing portion of the physical disk and is configured to be triple mirrored, while a second Grid Disk resides on the lower-performing portion of the disk and is used for archive or backup data, without any mirroring. Using ASM, you create disk groups from the Grid Disks, and from that point on the Exadata storage is transparent to the rest of the database and applications.
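A sketch, with hypothetical names and sizes, of carving Grid Disks out of the Cell Disks and handing them to ASM:

    CellCLI> CREATE GRIDDISK ALL HARDDISK PREFIX=data, size=300G
    CellCLI> CREATE GRIDDISK ALL HARDDISK PREFIX=reco

    -- From an ASM instance, build a disk group over the "data" grid disks
    -- of all cells; mirroring (normal redundancy) is handled by ASM.
    CREATE DISKGROUP data NORMAL REDUNDANCY
      DISK 'o/*/data_*'
      ATTRIBUTE 'compatible.rdbms' = '11.2.0.0.0',
                'compatible.asm'   = '11.2.0.0.0',
                'cell.smart_scan_capable' = 'TRUE';

The first CREATE GRIDDISK takes the outer, faster 300 GB of every Cell Disk; the second consumes the remaining, slower space, giving the “hot”/“cold” split described above.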
Exadata is able to extend the benefits of Intelligent Data Placement (IDP) to multiple grid disks on a single physical disk. The grid disks are optionally split and interleaved so that frequently accessed data on all the grid disks sits on the higher-performing portions of the outer tracks. This ensures that all the applications benefit from the higher performance of the outer tracks of the physical disks.
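Interleaving is an attribute set when the cell disks are created; a sketch, with assumed sizes and prefixes:

    CellCLI> CREATE CELLDISK ALL HARDDISK INTERLEAVING='normal_redundancy'
    CellCLI> CREATE GRIDDISK ALL HARDDISK PREFIX=data, size=300G
    CellCLI> CREATE GRIDDISK ALL HARDDISK PREFIX=reco

With interleaving enabled, both grid disks receive a share of the faster outer tracks instead of the second one being pushed entirely onto the slower inner portion.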
For each disk group, ASM automatically creates a failure group for each Exadata Storage Server, containing the Grid Disks that belong to that server. ASM then mirrors the data such that the mirror copies are on a different failure group and hence a different Exadata Storage Server. That way, ASM is able to protect the database from disk failure and the failure of an Exadata Storage Server.
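This is easy to see from the ASM views; a sketch (disk group number 1 is assumed):

    -- One failure group per Exadata Storage Server; mirror copies of an
    -- extent are always placed in different failure groups.
    SELECT failgroup, COUNT(*) AS disks
      FROM v$asm_disk
     WHERE group_number = 1
     GROUP BY failgroup;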
An Exadata administrator can create a resource plan that specifies how I/O requests should be prioritized. This is accomplished by putting the different types of work into service groupings called consumer groups. Consumer groups can be defined by a number of attributes, including the username, client program name, function, or length of time the query has been running. Once these consumer groups are defined, the user can set a hierarchy of which consumer group gets precedence in I/O resources and how much of the I/O resource is given to each consumer group. This hierarchy determining I/O resource prioritization can be applied simultaneously to both intra-database operations (operations occurring within a database) and inter-database operations (operations occurring among various databases).

In data warehousing or mixed workload environments, you may want to ensure different users and tasks within a database are allocated the correct relative amount of I/O resources. For example, you may want to allocate 50% of I/O resources to interactive users on the system, 30% to batch reporting jobs, and 20% to ETL jobs. This is simple to enforce using DBRM and the I/O resource management capabilities of Exadata storage.

When Exadata storage is shared between multiple databases, you can also prioritize the I/O resources allocated to each database, preventing one database from monopolizing disk resources and bandwidth and ensuring user-defined SLAs are met. For example, you may have two databases sharing Exadata storage. Assume that the business objectives dictate that database A should receive 33% of the total I/O resources available and that database B should receive 67%. To ensure the different users and tasks within each database are allocated the correct relative amount of I/O resources, various consumer groups are defined. For database A, 60% of the I/O resources are reserved for interactive marketing and 40% are allocated for batch marketing activities. For database B, assume that 30% of the resources are allocated for interactive sales activities and 70% for batch sales activities. These consumer group allocations are relative to the total I/O resources allocated to each database. A sketch of such a plan follows.
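A minimal sketch of the example above; the plan, consumer group, and database names are hypothetical. The intra-database shares are set with DBRM inside the database, and the inter-database split is set on each cell with an IORM plan:

    -- Intra-database plan for database A: 60% interactive, 40% batch marketing.
    BEGIN
      DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
      DBMS_RESOURCE_MANAGER.CREATE_PLAN('MKTG_PLAN', 'I/O priorities for marketing');
      DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('INTERACTIVE_MKTG', 'online users');
      DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('BATCH_MKTG', 'batch reports');
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE('MKTG_PLAN', 'INTERACTIVE_MKTG', '', mgmt_p1 => 60);
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE('MKTG_PLAN', 'BATCH_MKTG',       '', mgmt_p1 => 40);
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE('MKTG_PLAN', 'OTHER_GROUPS',     '', mgmt_p2 => 100);
      DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
      DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
    END;
    /

    -- Inter-database split on each Exadata cell: database A 33%, database B 67%.
    CellCLI> ALTER IORMPLAN dbPlan=((name=dba, level=1, allocation=33), -
                                    (name=dbb, level=1, allocation=67), -
                                    (name=other, level=2, allocation=100))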
Instance caging is very useful for consolidation. We want to support the consolidation of a large number of databases onto a grid while making sure they share the server resources effectively. In the past, Resource Manager only worked inside a single database instance; now it works between instances, so no one database can usurp the resources of the entire server. This makes managing a consolidated environment much easier. Instance caging can be set dynamically, with some limitations.
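A sketch; the CPU count and plan name are illustrative. The cage takes effect only while a resource plan is active in the instance:

    -- Cage this instance to 4 CPUs of the server.
    ALTER SYSTEM SET cpu_count = 4;
    -- Enable a resource plan so the cage is enforced.
    ALTER SYSTEM SET resource_manager_plan = 'DEFAULT_PLAN';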
Active Data Guard sends copies of the redo log files to a remote database that applies them continuously. With Active Data Guard, the remote database can be either a physical or a logical standby database. New in 11gR2 is automatic block repair in either direction: a corrupted block on the primary can be repaired from the standby copy, and vice versa.
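A sketch of the standard steps on a physical standby for opening it read-only while redo apply continues (Active Data Guard):

    -- On the physical standby: stop redo apply, open read-only, restart apply.
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    ALTER DATABASE OPEN READ ONLY;
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;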
Exadata has also been integrated with Oracle Enterprise Manager (EM) Grid Control to easily monitor the Exadata environment. By installing an Exadata plug-in into the existing EM system, statistics and activity on the Exadata Storage Servers can be monitored, and events and alerts can be sent to the administrator. The advantages of integrating the EM system with Exadata include: monitoring Oracle Exadata storage; gathering storage configuration and performance information; raising alerts and warnings based on set thresholds; and providing rich out-of-the-box metrics and reports based on historical data. All the functions users have come to expect from Oracle Enterprise Manager work with Exadata. By using the EM interface, users can easily manage the Exadata environment along with other Oracle database environments traditionally managed with Enterprise Manager. DBAs can use the familiar EM interface to view reports to determine the health of the Exadata system and to manage the configuration of the Exadata storage. Exadata Storage Servers also provide a comprehensive command-line interface (CLI) to configure, monitor, and administer the server. In addition, a distributed version of the CLI utility is provided so that commands can be sent to multiple servers at once, easing the management of multiple servers. Each Exadata Storage Server has ILOM functionality to perform remote hardware administration tasks, such as power cycling the servers.
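A few representative commands, as a sketch (the cell_group file listing every cell is an assumption of the dcli setup):

    CellCLI> LIST CELL DETAIL
    CellCLI> LIST METRICCURRENT WHERE objectType = 'CELLDISK'
    CellCLI> LIST ALERTHISTORY

    # dcli, the distributed CLI, fans a command out to every cell in the group file.
    dcli -g cell_group cellcli -e "list cell attributes name, status"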