See SPARC M7 Processor details in the Backup section.
SMP glueless scalability
Link level dynamic congestion avoidance
If the data path between a pair of processors is congested, traffic can be routed around it through a third processor, based on destination queue utilization
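The routing decision above can be sketched as follows. This is an illustrative model, not Oracle's actual interconnect logic; the function name, threshold, and utilization values are all assumptions for the sketch.

```python
# Sketch of link-level congestion avoidance: prefer the direct link, but if
# its destination queue is congested, detour via the least-utilized third
# processor. Threshold and inputs are illustrative assumptions.

def pick_route(direct_queue_util, detour_queue_utils, threshold=0.8):
    """Return 'direct', or the index of the least-loaded intermediate hop."""
    if direct_queue_util < threshold:
        return "direct"
    # Direct path congested: route around via the third processor whose
    # destination queue is least utilized.
    return min(range(len(detour_queue_utils)),
               key=lambda i: detour_queue_utils[i])

print(pick_route(0.3, [0.5, 0.2]))   # -> direct
print(pick_route(0.95, [0.5, 0.2]))  # -> 1
```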
RAS
Message retry: if an error occurs on a transmission, the hardware retries it
Link retrain: if a link is degrading, the hardware attempts to retrain it
Lane failover: if a link cannot be retrained, the hardware fails over to the remaining lanes and continues to operate without software intervention
Remap target space = the portion of the 15 remaining DIMMs into which the to-be-retired DIMM is remapped.
DIMM sparing assumes the underlying HW fault is causing correctable errors, not uncorrectable errors.
After remap, the remaining 15 DIMMs continue to have all of their error protection features intact.
There is no increase in the exposure to uncorrectable errors.
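The remap arithmetic above can be made concrete with a rough model. This is an illustrative sketch, not Oracle firmware behavior; the 16-DIMM, 32 GB figures are assumptions chosen to match the slide's 15-surviving-DIMM scenario.

```python
# Illustrative model of DIMM sparing: when one of 16 DIMMs is retired, its
# contents are remapped into a reserved remap target space spread evenly
# across the remaining 15 DIMMs. Capacities below are assumptions.

DIMMS = 16
DIMM_GB = 32

def remap_share(retired=1):
    """GB each surviving DIMM absorbs when `retired` DIMMs are remapped."""
    survivors = DIMMS - retired
    return retired * DIMM_GB / survivors

print(f"{remap_share():.2f} GB absorbed per surviving DIMM")
```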
Oracle offers a full portfolio of virtualization solutions to address your needs. SPARC is the leading platform to have the hard-partitioning capability that provides the physical isolation needed to run independent operating systems. Many customers have already used Oracle Solaris Containers for application isolation.
Oracle VM Server for SPARC provides another important feature with OS isolation. This gives you the flexibility to deploy multiple operating systems simultaneously on a single T-Series server with finer granularity for computing resources. For SPARC T-Series processors, the natural level of granularity is an execution thread, not a time-sliced microsecond of execution resources. Each CPU thread can be treated as an independent virtual processor. The scheduler is built into the CPU, without the extra overhead for scheduling in the hypervisor. What you get is a lower-overhead and higher-performance virtualization solution.
Your organization can couple Oracle Solaris Containers and Oracle VM Server for SPARC with the breakthrough space and energy savings afforded by Oracle’s SPARC T-Series servers to deliver a more agile, responsive, and low-cost environment.
The T7-4 server Processor Module has two separate power domains
One half can be powered off independently from the other
Limits the impact of a fault to a single processor (CPU Module, CM) and its memory
Customers familiar with the SPARC T4 line of products will find the SPARC T5 systems very familiar. The main change between the two generations is the processor and motherboard.
Details on the comparison
Oracle:
SPARC T7-1: one 4.13 GHz 32-core SPARC M7 processor (32 cores), 256 GB memory, 2x 600 GB HDD
Oracle Solaris,
Oracle VM Server for SPARC
3 years of Oracle Premium Support For Systems
SPARC T7-2: two 4.13 GHz 32-core SPARC M7 processors (64 cores), 512 GB memory, 2x 600 GB HDD
Oracle Solaris,
Oracle VM Server for SPARC
3 years of Oracle Premium Support For Systems
SPARC M7-8: eight 4.13 GHz 32-core SPARC M7 processors (256 cores), 2 TB memory
Oracle Solaris,
Oracle VM Server for SPARC
3 years of Oracle Premium Support For Systems
IBM:
Power S824: 2x 3.52 GHz 12-core POWER8 Processor Card (24 cores, four P8 chips), 256 GB memory, 2x 600 GB HDD
IBM AIX Standard Edition, 3 years of All-Severity 24x7 SWMA
PowerVM Enterprise Edition, 3 years of All-Severity 24x7 SWMA
3 years of IBM OnSite Repair 24x7 4-hour
Power E850: 2x 3.02 GHz 12-core POWER8 Processor Card (48 cores, eight P8 chips), 512 GB memory, 2x 600 GB HDD
IBM AIX Standard Edition, 3 years of All-Severity 24x7 SWMA
PowerVM Enterprise Edition, 3 years of All-Severity 24x7 SWMA
3 years of IBM OnSite Repair 24x7 4-hour
Power E870: 2x 4.19 GHz 40-core POWER8 Processor Card (80 cores, eight P8 chips), 1 TB memory
IBM AIX Standard Edition, 3 years of All-Severity 24x7 SWMA
PowerVM Enterprise Edition, 3 years of All-Severity 24x7 SWMA
3 years of IBM OnSite Repair 24x7 4-hour
Power E880: 4x 4.02 GHz 48-core POWER8 Processor Card (192 cores, 16 P8 chips), 2 TB memory
IBM AIX Standard Edition, 3 years of All-Severity 24x7 SWMA
PowerVM Enterprise Edition, 3 years of All-Severity 24x7 SWMA
3 years of IBM OnSite Repair 24x7 4-hour
The NVMe PCIe switch card is a factory configured option only. The communication cables from the switch to the drive bays cannot be installed afterwards.
A single on-board SAS3 HBA supports all 8 disk drive bays. The NVMe PCIe switch enables 4 drive bays to support NVMe SSD. SAS3 and NVMe drives can be mixed. Any drive bays that are NVMe enabled will continue to support SAS3.
NVMe SSDs are hot-pluggable, not hot-swappable: they require preparation before they can be safely removed.
The SPARC T7-2 always includes two SPARC M7 processors. Half and fully populated memory configurations are supported.
The disk drive bay is split (4+2) across the two on-board SAS HBAs.
One or two NVMe PCIe switch cards can be used to enable NVMe support in 4 drive bays. See subsequent slides for more details.
SAS3 and NVMe drives can be mixed. Any drive bays that are NVMe enabled will continue to support SAS3.
The NVMe PCIe switch card is a factory configured option only. The communication cables from the switch to the drive bays cannot be installed afterwards.
NVMe SSDs are hot-pluggable, not hot-swappable: they require preparation before they can be safely removed.
This slide can also be used as a Q and A slide
SPARC T7-4 can be ordered with two or four SPARC M7 processors (one or two processor modules, each with two processor and 32 DIMM slots). Half and fully populated memory configurations are supported, 16 or 32 DIMMs per processor module (32 or 64 DIMMs per system).
All DIMMs must be of the same kind in the initial order, and all DIMMs on a single processor module must always be of the same kind. DIMM density can differ between the two processor modules (but not in a factory-installed configuration).
The disk drive bay is split (4+4) across the two on-board SAS HBAs. The disk drive bay is split similarly (4+4) across the two optional NVMe PCIe switches.
NVMe SSDs are hot-pluggable, not hot-swappable: they require preparation before they can be safely removed.
Each of the two NVMe PCIe switch cards enable NVMe support in 4 drive bays. The two cards must be ordered together.
See subsequent slides for more details. SAS3 and NVMe drives can be mixed. Any drive bays that are NVMe enabled will continue to support SAS3.
The NVMe PCIe switch cards can be ordered for the SPARC T7-4 either as a factory configured option, or added later at customer site.
The SPARC M7-8 is offered in two different configurations; one cannot be converted to the other after shipping from the factory.
The SPARC M7-8 comes with either one physical domain or two physical domains. Each configuration is ordered with its specific base package.
The SPARC M7-8 with one physical domain is wired to permanently have one physical domain; likewise, the SPARC M7-8 with two physical domains is permanently configured with two static physical domains. Both systems are based on a hardwired glueless system interconnect that cannot be reconfigured after factory assembly.
Note that the SPARC M7-16 server is different, and can be reconfigured via software functions to consist of 1, 2, 3 or 4 physical domains.
The SPARC M7-8 server is recommended to be factory-installed in the Sun Rack II 1242. Up to three SPARC M7-8 servers can be mounted in a single rack. Only one system can be factory-installed into the rack; additional systems must be rack-mounted on-site.
Two power distribution units (PDUs) must be included when ordering a factory rack-mounted SPARC M7-8 system. Each PDU has three 3-phase power cables. A 3 RU space must be allocated for power-cable routing at either the top or the bottom of the rack. Additional space may be required for other cabling (networking, I/O, etc.). The server enclosure (aka CMIOU chassis) itself consumes 10 RU.
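The rack-unit budget implied by these figures can be checked with simple arithmetic. This sketch uses only the numbers in the notes (10 RU per enclosure, 3 RU for power-cable routing) and assumes the Sun Rack II 1242 provides 42 RU; it does not model space for other cabling.

```python
# Rack-unit budget for SPARC M7-8 servers in a Sun Rack II 1242 (assumed 42 RU).
# Each server enclosure is 10 RU; 3 RU are reserved for power-cable routing.

RACK_RU = 42
SERVER_RU = 10
CABLE_RU = 3

def ru_used(servers):
    """Total RU consumed by `servers` enclosures plus cable-routing space."""
    return servers * SERVER_RU + CABLE_RU

for n in (1, 2, 3):
    print(f"{n} server(s): {ru_used(n)} of {RACK_RU} RU")
```

With three servers, 33 of 42 RU are consumed, which is consistent with the stated maximum of three servers per rack.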
Oracle SuperCluster M7-8 Rack Component List
(1 or 2) M7 compute node chassis configured as either:
- 1 PDOM (M7-8) or 2 PDOM (M7-4)
- 2 (M7-4 only), 4, or 8 CMIOUs
- 16 (M7-4 only), 64 (half pop), or 128 (full pop) 32 GB DDR4 DIMMs (0.5 TB, 2 TB, or 4 TB total)
- 2 (M7-4 only), 4 (half pop), or 8 (full pop) 10 GbE (Niantic, including transceivers and optical cables)
- 2 (M7-4 only), 4 (half pop), or 8 (full pop) CX3 IB HCA
- 1 or 2 Powerville cards
- A single M7 rack config must be an M7-4 (2x4) configuration
- A dual M7 rack config can be either two M7-8s (1x8) or two M7-4s (2x4), but both must be the same
(3 to 11) Exadata X5-2L storage servers as either:
a) High Capacity:
- (2) E5-2630 2.4 GHz CPU
- (4) 16 GB DDR4-2133 DIMMs
- (4) 8 GB DDR4-2133 DIMMs
- (4) 1.6 TB Aura 3.0 NVMe flash cards (F160)
- (12) 4 TB 3.5" SAS2 7200 rpm drives
- (1) dual-port QDR CX3 HCA
- (1) SAS3 RAID-INT HBA w/ supercap (Aspen)
- (1) 8 GB MLC USB flash drive (internal)
b) Extreme Flash:
- (2) E5-2630 2.4 GHz CPU
- (8) 8 GB DDR4-2133 DIMMs
- (8) 1.6 TB NVMe SSD drives
- (1) dual-port QDR CX3 HCA
- (4) NVMe switch cards
- (2) 8 GB MLC USB flash drives (internal)
(1) ZS3-ES storage array:
- (2) X3-2 storage heads, each configured with:
  - (2) E5-2658 (8-core, 2.1 GHz) CPU
  - (16) 16 GB 1600 DIMMs (256 GB/node)
  - (1) Thebe-Ext HBA
  - (1) cluster interconnect card
  - (1) IB HCA
  - (2) 1.2 TB HDDs in each storage node
  - (2) 1.6 TB read-optimized SSDs in each storage node
- (1) DE2-24C storage tray configured with:
  - 20x 4 TB SAS drives in the storage tray
  - 4x 200 GB write-optimized SSDs in the storage tray
  - 80 TB clustered
(2 or 3) NM2-36P switches (spine is optional)
(1) Cisco 4948
PDUs