Oracle’s strategy is to deliver a complete suite of storage products, engineered to run Oracle software faster and more efficiently than traditional storage products. This means that customers can expect Oracle’s application, database, middleware, and operating system software to operate best and deliver their greatest customer value when running on Oracle storage systems.
Oracle’s storage portfolio delivers performance, efficiency, and scalability unparalleled in the market – and Oracle will continue to innovate and extend its capabilities to deliver best-of-breed products that solve many of the customer’s critical data requirements.
While Oracle delivers best-of-breed products for all applications, Oracle storage solutions run Oracle software faster and more efficiently than other storage products.
Pillar Axiom has 3 broad “value” differentiators that drive both lower TCO and faster ROI.
It is very easy to use, yet it is enterprise class. Data and storage services can be changed dynamically as the IT environment and application importance change. QoS manages prioritization based on application profiles, even when multiple concurrent requestors hit the system for resources. You can schedule QoS level changes when you know certain processes will be running – such as end-of-quarter financials or even nightly backups – and do so permanently or temporarily.
It’s scalable and elastic. You don’t have to trade off capacity for performance or vice versa; you can scale both linearly. The Axiom is a natural choice for data center consolidation projects because you can scale for new workloads and bring new technology (like SSD) into a system WITHOUT disrupting operations. This means longer system useful life at lower cost.
Industry-leading efficiency. This claim is validated by our own customers, who routinely see 80% utilization of their system capacity – WITHOUT performance degradation. That is twice the industry average according to Gartner Dataquest. This is unique.
One of the biggest Axiom values is that you get to use a large percentage of the capacity you pay for – 80%, without performance degradation. That is twice the 35-40% utilization typical of competitors, per Gartner Dataquest. Higher utilization means lower TCO and faster ROI when you load up a system with multiple workloads. Axiom customer call-home logs validate this utilization claim.
Here are 3 examples of customers that made the switch to Axiom from EMC, IBM, and HDS.
EMIS (Egton Medical Information Systems) in the UK is a managed service provider for more than 40M patient records in the UK National Health Service. They have 9 Axioms and run their entire service on Pillar Axiom. Axiom was chosen over EMC.
Rona is Canada’s largest home improvement retailer. Previously an “all blue” IBM shop, Rona chose Pillar to house the data for its inventory, dispatching, and CRM systems. Pillar Axiom NAS and SAN together form a unified solution for Rona.
Blackrock is one of the world’s premier asset management firms. They run all of their quant and fundamentals analysis on the data housed on the Axiom, leveraging QoS. Axiom was chosen over HDS 99xx.
This slide describes the hardware components of the Axiom architecture.
Be sure to point out that the competition does not do any of this. They have “two controllers do it all” architectures (except 3PAR, which can scale to 8). This is the slide to start focusing on the large amounts of cache and the distributed RAID.
Be sure to point out that the Axiom is architected for 5-9’s availability. There is no single point of failure. Every other vendor (e.g. EMC, HP, NetApp, Dell EqualLogic) makes the 5-9’s claim; we have the same level of redundancy as they do, if not better.
The Pillar Axiom is highly scalable and modular. Tried and tested, it is now in its 5th generation of software so quality has been honed through the development and delivery process.
Its modularity is built on 3 basic hardware components – all within a single “model” – so you don’t need to buy another model as you scale hardware.
You can start with a system as small as 1 Slammer, 1 Brick, and 1 Pilot, but you can scale to as much as 4 Slammers (8 active-active control units) and 64 Bricks (832 drives, or 1.6PB of capacity) with 1 Pilot. You can intermix SATA, FC, or SSD storage classes.
The Axiom is a single model that scales from 12TB all the way to 1.6PB. You don’t need to do forklift upgrades to new models, since the system is built on common modularity, a common software base, and a common storage pool. Axiom technology upgrades can be done non-disruptively in almost all cases – i.e. you can add capacity or upgrade technology without losing access to your data.
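The scaling figures above can be sanity-checked with a quick back-of-the-envelope calculation (the per-Brick drive count and average drive size are derived from the quoted maximums, not from a spec sheet – treat them as illustrative):

```python
# Illustrative scaling math for a maximum Axiom configuration,
# using only the figures quoted in these notes:
# 4 Slammers, 64 Bricks, 832 drives, 1.6 PB, 1 Pilot.
MAX_SLAMMERS = 4
CONTROL_UNITS_PER_SLAMMER = 2     # active-active control units
MAX_BRICKS = 64
MAX_DRIVES = 832
MAX_CAPACITY_TB = 1600            # 1.6 PB expressed in TB

drives_per_brick = MAX_DRIVES // MAX_BRICKS                 # 13 drives per Brick
control_units = MAX_SLAMMERS * CONTROL_UNITS_PER_SLAMMER    # 8 control units
avg_tb_per_drive = MAX_CAPACITY_TB / MAX_DRIVES             # ~1.9 TB per drive

print(f"{drives_per_brick} drives/Brick, {control_units} CUs, "
      f"~{avg_tb_per_drive:.1f} TB/drive at max capacity")
```

The point for the customer: the same modular building blocks cover everything from the 12TB entry system to the 1.6PB maximum, so there is no model boundary to upgrade across.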
The “business value” differentiators are driven by technology differentiators built into the Pillar Axiom.
Pillar’s QoS (Quality of Service) is patented. It prioritizes application IO fulfillment and provides unique contention management that enables consolidation and co-existence of multiple application data types and workloads associated with Axiom storage. This translates to big benefits. QoS breaks the age-old FIFO paradigm - finally. Storage I/O resources are assigned according to the associated application’s business value – not “first come, first served”. Axiom’s QoS is a system-wide approach from I/O request to data placement striped across drives, RAID levels, and even down to the disk spindle level (inner band versus outer band on HDDs).
Modular “modern” architecture. The Axiom is modular, not monolithic. Because it is modular, you can scale without having to do “fork lift” upgrades that are highly disruptive. Capacity scaling from 12TB to 1.6PB. You can grow and rebalance your Axiom system storage pool based on your changing environment. Provides flexibility and elasticity. Unique.
Distributed RAID. Linear scaling of performance and capacity is achieved via unique distribution of RAID controllers – 2 in every Brick, up to 128 in a system. Performance stays extreme even when drive rebuilds take place, since the system is architected to provide RAID controller rebuild power at a ratio of one controller per six drives. Competitors use an archaic “two controller” architecture across their entire system; when rebuilds occur, that old architecture can topple a competitor’s system. Not so with Axiom.
This is an example of how QoS should work in practice. In this example of a journey you might take on a plane, the more money you are willing to pay – because you deserve it – the more “service” you are going to get. It’s not just where you sit on the plane – which can be like using SSD (first), FC (business) or SATA (coach), but the other privileges you get in terms of how quickly you board and exit, seat comfort, number of attendants etc.
Similar to the QoS example seen before, the Axiom storage system provides a QoS model that not only allows you to optimally place data on drives, but allows you to provide the adequate resources to service the critical business application data.
The animation on the left shows how other vendors typically process I/O. Note that the important application (RED) IO will have to wait for other applications to receive their IO.
The Axiom prioritizes IOs via multiple queues with 5 priority levels. This means important applications receive IO service at higher rates, since QoS allows higher-priority IOs to be processed ahead of others. The Axiom will reduce IOs to lower-priority applications, so higher-priority applications get up to 6x more IOs executed over time.
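If the customer wants intuition for how priority queuing differs from FIFO, a minimal sketch helps. This is NOT Axiom’s actual scheduler – just a generic 5-level priority queue (the level names are invented for illustration) showing how queued high-priority IOs are serviced ahead of earlier-arriving low-priority ones:

```python
import heapq

# Hypothetical 5-level priority map (lower number = served first).
# These names are illustrative, not Axiom's actual QoS level names.
PRIORITIES = {"premium": 0, "high": 1, "medium": 2, "low": 3, "archive": 4}

def schedule(io_requests):
    """Return IOs in service order: by priority level, then arrival order.

    io_requests is a list of (app_name, priority_name) tuples in arrival order.
    """
    heap = [(PRIORITIES[prio], seq, app)
            for seq, (app, prio) in enumerate(io_requests)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

# Backup IOs arrive first, but the OLTP app's IOs are serviced ahead of them.
pending = [("backup", "archive")] * 3 + [("oltp", "premium")] * 3
order = schedule(pending)
print(order)  # all 'oltp' IOs drain before any 'backup' IO
```

Contrast this with the FIFO animation on the left of the slide, where the red (important) IOs would have to wait behind everything that arrived earlier.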
With QoS, you are allowed to have multiple service levels – just like the plane. If everyone were equal – like on a low-budget airline – your service level (and your performance) would suffer, but you would expect it.
QoS enables multi-tenancy or the ability to provide multiple service levels. So, you can have a Tier 1 OLTP application, a business critical Exchange environment and a tertiary archive application all on the same storage system – with distinct QoS or performance levels.
The ability to store multiple applications’ data on the same storage system (without storage silos) allows you to increase the utilization of the system – get all the seats full. This allows you to reduce your CapEx (less storage to buy) and OpEx (less storage to manage – administer, power, cool) in the process.
The Pillar Axiom product provides pre-defined application workload profiles that simplify real-world storage configuration and support best practices. Different apps have varying data performance characteristics, and profiles help customers understand how various storage components should be set up to best support their application needs – e.g. read- versus write-intensive, large versus small blocks – and these can change on the fly without changing the storage infrastructure.
An example of an application profile – an application can have different LUNs for different types of data, but these need to work in concert.
Provisioning is as simple as launching the UI, selecting the profiles and confirming your selection.
What if you pick the wrong settings?
Axiom’s patented QoS lets you reset priority and performance levels without interrupting application data access.
As shown on the above chart, the QoS priority can be reset within the same storage class to promote or demote the importance of the LUN or FS. This can be done on a temporary or permanent basis.
If you choose to make a temporary QoS change for a short period of time – say, to complete a backup job in a particular time window – the system increases the CPU and cache usage for that period. Later, it can be throttled back to its original performance level. No movement of data occurs.
In a permanent QoS migration – perhaps because the LUN or FS in question is underperforming – the extents or data segments of the LUN/FS are spread across more drives to boost performance, in addition to the performance boost from the CPU+Cache.
The IO and access biases can also be changed as the data patterns vary.
Example of a “Permanent” change with physical data movement. Note that RAID layouts can also change when data is promoted or demoted across tiers based on I/O bias settings.
In this case, you are making a permanent QoS movement to a different storage class to increase/decrease the performance. In this movement, you can change the priority, which changes the CPU+Cache allocation, the stripe width, the RAID type etc.
This is essentially, predictive data tiering across different storage classes – all done without IO interruptions to the application.
Axiom’s Storage Domains enable you to isolate data at the physical Brick level. Since you can have up to 64 Bricks in a system, you can have up to 64 domains. Competitors co-mingle data and claim they can secure it virtually; in many use cases virtual isolation is unacceptable for regulatory compliance – the data needs to be physically isolated. That’s what Axiom’s Storage Domains do – physically isolate data down to the drive level.
This is a BIG differentiator in selling to Managed Service Providers and prospects building private cloud infrastructures.
There are 4 basic use cases you can highlight depending on the type of customer you are selling to:
We think this is the most differentiated use case. You can refresh drive technology in an Axiom without disrupting operations: create a new domain with new drives (e.g. SSD); move data from the old-technology (e.g. 300GB FC) domain; remove the old-technology Bricks. You have just upgraded without disruption.
Security purposes – especially good for Public Sector – you can isolate a workload and not have any data co-mingled with other data.
For utility computing/grid or chargeback scenarios, you can lock down a user group or department to a drive domain to monitor, manage, and possibly charge back for usage or services.
In Unified (NAS and SAN) configurations, you can isolate protocols since the data structures associated with NAS versus SAN can be quite different.
This is a feature that ships with the system but is chargeable if used. Consult your price list.
You can start really small (1 Slammer and a few Bricks) and scale the system as the need for capacity and performance grows with your business.
Unlike most storage systems out there, the performance and capacity scale is linear as you add these modular components.
You increase bandwidth, IOPS, and ports by adding Slammers.
You increase IOPS and capacity by adding Bricks.
This slide really shows the differentiation between us and all the “Two controllers do it all” companies. This is the slide where the customer’s light bulb goes off. They really see the difference in the basic architectures between us and everyone else. Any customer who is looking to scale beyond a few bricks should see the advantages of our architecture.
Point out:
-- we put two RAID controllers in each Brick and have a 6:1 drive-to-controller ratio no matter how many drives we have. The competition will scale to hundreds of drives per controller – some as many as 500 drives per RAID controller
-- use this slide to point out our drive rebuild scheme. It’s clear to see how we isolate the rebuild within a Brick, while the competition has to use 50% of their controllers to rebuild a drive
-- also point out that if a SATA drive fails, since the rebuild is isolated within that SATA Brick, the performance of LUNs residing on FC drives is not impacted. With the competition, a SATA failure is handled by one of the two controllers (50% of the controllers). That controller probably owns 50% of the LUNs in the system, which means that for the hours or days the rebuild takes, all of the LUNs associated with that controller have degraded performance – whether they are FC or SATA LUNs
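To make the rebuild talking point concrete, here is a small illustrative calculation (not a benchmark – just the arithmetic implied by the controller counts in these notes) comparing the fraction of a system’s RAID controllers tied up by a single drive rebuild:

```python
# Illustrative "blast radius" of one drive rebuild, using the
# controller counts from these notes. Not measured data.

def rebuild_blast_radius(total_controllers, controllers_used_for_rebuild):
    """Fraction of the system's RAID controllers occupied by one rebuild."""
    return controllers_used_for_rebuild / total_controllers

# Axiom at full scale: 2 RAID controllers per Brick x 64 Bricks = 128,
# with the rebuild isolated to the failed drive's own Brick (2 controllers).
axiom = rebuild_blast_radius(total_controllers=128, controllers_used_for_rebuild=2)

# "Two controllers do it all" design: one of the two controllers
# handles the rebuild, and it owns roughly half the system's LUNs.
dual = rebuild_blast_radius(total_controllers=2, controllers_used_for_rebuild=1)

print(f"Axiom (64 Bricks): {axiom:.1%} of controllers busy rebuilding")
print(f"Dual-controller:   {dual:.1%} of controllers busy rebuilding")
```

The contrast is the message: on the dual-controller design, half the system’s LUNs ride on the controller doing the rebuild; on the Axiom, the impact is confined to one Brick.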
For all Oracle databases, HCC should be leveraged along with Axiom’s Application Profiles which are pre-built management profiles for Oracle applications.
OVM 2.0 is fully supported with a Storage Connect plug-in to allow you to manage Axioms and virtual server infrastructure from the same user interface.
Oracle Enterprise Manager can be used to manage Axioms using the plug-in developed by Axiom.
An example of how this vision translates to value is seen in Oracle’s Hybrid Columnar Compression which originated in the Exadata product and is now available in both ZFSSA and Pillar Axiom. Early benchmarks are projecting 3-5x faster plus clear efficiency gains versus primary competitors, NTAP and EMC.
HCC only works with Oracle storage, a distinct differentiator and a clear example of the value of integration.
Simple multi-site, multi-tier scenario shown, without shared QFS clients
Local site SAM QFS MDS creates a local disk archive (1) and tape archive (2) copy and uses a file transfer feature to create remote disk archive (3) and tape (4) archive copies
This basic architecture can scale to thousands of SAN clients, hundreds of file systems, billions of files, PBs of disk cache, and most importantly, unlimited archive capacity
Next, we’ll take a look at core SAM functionality
---------------------------------------other notes if needed---------------------------------
Any 3rd party disk that is supported in Solaris can be used as a disk cache or disk archive
Similarly, many 3rd party tape drives and optical disk drives have been tested and qualified to serve as archive targets behind SAM
Many customers use 3rd-party tools to move data (e.g., rsync, which only replicates changed blocks) and parallel FTP to replicate the disk cache across the WAN, where it is ingested into the remote-site QFS file system like a new application workload
(DR) Remote Site (not SAM Remote)
QFS servers talking to each other using SAM-RFTD feature – Transfers only TAR file
Export SAMFSDMP for DR / full QFS file system restore
Requires remote file system only to enable transfer (for daemon to start up) and cache
SAM running on remote server to archive to tape
Works on 2 MDS and creates pipe between them
NFS across the network using parallel FTP to the remote server (multithreaded). Performance issues over the IP wire? Throttling controls are available.
Local site can only see one copy at remote site
Remote site makes own copy to tape archive (local site doesn’t know about remote tape archive)