Having tested and validated Oracle Grid reference configurations, Dell Global Solutions engineers share their insights on how best to set up and implement this computing resource, enabling networked computers to share on-demand resource pools.
Building an Oracle Grid with Oracle VM on Dell Blade Servers and EqualLogic iSCSI Storage
1. Building an Oracle Grid with Oracle VM on Dell Blade Servers and EqualLogic iSCSI Storage
Kai Yu, Sr. System Engineer Consultant, Dell Global Solutions Engineering
David Mar, Engineering Manager, Dell Global Solutions Engineering
2. ABOUT THE AUTHORS
Kai Yu is a Senior System Engineer working in the Dell Oracle
Solutions Lab. He has over 14 years of Oracle DBA and solutions
engineering experience. His specialties include Oracle RAC, Oracle EBS,
and Oracle VM. He is an avid author of Oracle technology articles and a
frequent presenter at Oracle OpenWorld and Collaborate. He is also the
Oracle RAC SIG US Events Chair.
David Mar is an Engineering Manager for the Dell | Oracle Solutions group.
He directs a global team that creates Oracle RAC reference
configurations and builds best practices based on tested and validated
configurations. He has worked for Dell since 2000 and holds an MS in
Computer Science and a BS in Computer Engineering from Texas A&M
University.
6. Compute Virtualization Roles / Components
[Diagram: a virtual Grid, in which a VM server pool hosts database and application VMs, each with its own OS, alongside a physical Grid, in which databases DB 1-DB 3 run across physical servers]
Virtual Grid (Oracle VM):
• Partitioning a single server
• Consolidation
Physical Grid (Oracle RAC):
• Scale-up through scale-out of hardware
• DB scalability only
7. OVM | VIRTUALIZATION’S ROLE
Testing Oracle RAC infrastructure with minimal hardware
Testing Oracle RAC pre-production environments
Application server testing and production
Maximizing server utilization rates
Consolidating low-utilization servers
Partitioning one application from another
8. OVERVIEW OF EQUALLOGIC
[Diagram: Volumes 1-3 carved from an EqualLogic storage pool]
4 Gigabit network ports per controller
2 GB cache per controller
Intelligent automated management tools
Self Managing Array
Linear Scalability
Data Protection Software
9. STORAGE LAYER ON EQUALLOGIC
EqualLogic and ASM work well together:
– Performance Implications of Running Oracle ASM with Dell™ EqualLogic™ PS Series Storage, http://www.dell.com/oracle
– Best Practices for Deployment of Oracle® ASM with Dell™ EqualLogic™ PS iSCSI Storage System, http://www.dell.com/oracle
Peer Storage architecture
Quick setup
Linear performance improvements
10. SERVER LAYER ON BLADES
Oracle RAC servers
[Diagram: front and back views of the blade enclosure]
Energy-efficient PowerEdge M1000e enclosure
11th-generation blade servers available
Intel "Nehalem" processors
Intel QuickPath memory technology
Three redundant fabrics
16 half-height blade servers
14. 3 – MANAGEMENT
• Oracle Grid Control manages both the physical Grid and the virtual Grid
• Consider HA for the production management infrastructure
• Manage from outside the blade cluster to avoid a single point of failure
• Dell servers enable health monitoring from Enterprise Manager (EM)
17. GRID IMPLEMENTATION
Implementation tasks
– Configure the Grid Control management infrastructure
– Configure EqualLogic shared storage
– Configure the physical Grid
– Configure the virtual Grid
Grid Control management infrastructure configuration
– OEL 4.7 64-bit was installed for the Grid Control server
– Oracle Enterprise Manager Grid Control 10.2.0.3 installed:
OMS server, repository database, and Grid Control agent
– Grid Control 10.2.0.5 with patch 3731593_10205_x86_64
applied to the OMS server, repository, and agent
– Enable the Virtualization Management Pack with patch 8244731 on OMS;
install the TightVNC Java viewer on the OMS server
– Restart the OMS server
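The patch-and-restart sequence above typically uses OPatch and emctl. A command sketch, not the validated procedure; the OMS home path and staging directory are illustrative:

```shell
# Stop the OMS, apply the patch with OPatch, then restart (sketch)
export ORACLE_HOME=/opt/oracle/product/oms10g   # illustrative OMS home
$ORACLE_HOME/bin/emctl stop oms
cd /tmp/patch_staging && $ORACLE_HOME/OPatch/opatch apply
$ORACLE_HOME/bin/emctl start oms
$ORACLE_HOME/bin/emctl status oms
# On each monitored host, restart the Grid Control agent as well
$AGENT_HOME/bin/emctl stop agent && $AGENT_HOME/bin/emctl start agent
```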
18. GRID IMPLEMENTATION: EQUALLOGIC STORAGE
EqualLogic shared storage configuration
– Storage volumes for the physical Grid:
Volume       Size    RAID  Used for          OS Mapping
blade_crs    2 GB    10    OCR/voting disk   Yes
blade_data1  100 GB  10    Data for DB1      ASM diskgroup1
blade_data2  100 GB  10    Data for DB2      ASM diskgroup2
blade_data3  150 GB  10    Data for DB3      ASM diskgroup3
blade_data5  150 GB  10    Data for DB4      ASM diskgroup5
– Storage volumes for the virtual Grid:
Volume       Size       RAID  Used for          OS Mapping
blade_data4  400 GB     10    VM repositories   /OVS
blade_data6  500 GB     10    VM repositories   /OVS/9A87460A7EDE43EE92201B8B7989DBA6
vmcor1-5     1 GB each  10    OCR/voting disk   2x OCRs / 3x voting disks
vmracdb1     50 GB      10    Data for RAC DB   ASM diskgroup1
vmracdb2     50 GB      10    Data for RAC DB   ASM diskgroup2
20. GRID IMPLEMENTATION: PHYSICAL GRID
Physical Grid configuration: 8-node 11g RAC
– Network configuration:
Network Interface  IO Modules  Connections             IP Addresses
eth0               A1          Public network          155.16.9.71-78
eth2               B1          iSCSI connection        10.16.7.241-255 (odd)
eth3               B2          iSCSI connection        10.16.7.240-254 (even)
eth4, eth5         C1, C2      Bonded to bond0         192.168.9.71-78
VIP                            Virtual IP for 11g RAC  155.16.9.171-178
– iSCSI storage connections
• Use the Open-iSCSI administration utility to configure host access to the
storage volumes: eth2-iface on eth2, eth3-iface on eth3
• Use the Linux Device Mapper to establish multipath devices with storage
aliases in /etc/multipath.conf:
multipaths {
    multipath {
        wwid  36090a028e093dc7c6099140639aae1c7
        alias ocr-crs
    }
}
– Check the multipath devices in /dev/mapper: /dev/mapper/ocr-crsp1
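The Open-iSCSI side of the configuration above might look like the following command sketch, run as root on each node. The iface names match the slide; the group IP 10.16.7.100 is a placeholder:

```shell
# Create one iSCSI iface per NIC and bind it to its interface
iscsiadm -m iface -I eth2-iface --op=new
iscsiadm -m iface -I eth2-iface --op=update -n iface.net_ifacename -v eth2
iscsiadm -m iface -I eth3-iface --op=new
iscsiadm -m iface -I eth3-iface --op=update -n iface.net_ifacename -v eth3
# Discover the EqualLogic group and log in on both ifaces
# (10.16.7.100 is a placeholder for the group IP)
iscsiadm -m discovery -t st -p 10.16.7.100 -I eth2-iface -I eth3-iface
iscsiadm -m node --login
# Reload multipath and verify the aliases from /etc/multipath.conf
service multipathd reload
multipath -ll
```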
21. GRID IMPLEMENTATION: PHYSICAL GRID
– Use block devices for 11g Clusterware and RAC databases
• OEL 5 no longer supports raw devices
• Use block devices for the 11g Clusterware and RAC databases
• Use /etc/rc.local to set the proper ownerships and permissions
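Because block-device permissions reset on every boot, the /etc/rc.local step could look like this fragment. It is schematic: the partition aliases follow the volume tables in this deck, and the owner/mode conventions (OCR to root:oinstall, voting disks to oracle:oinstall, ASM disks to oracle:dba) should be checked against the 11g documentation for the actual layout:

```shell
# /etc/rc.local fragment (schematic device names from this deck)
chown root:oinstall   /dev/mapper/ocr-crsp1
chmod 0640            /dev/mapper/ocr-crsp1
chown oracle:oinstall /dev/mapper/ocr-crsp2
chmod 0644            /dev/mapper/ocr-crsp2
chown oracle:dba      /dev/mapper/blade-data1p1
chmod 0660            /dev/mapper/blade-data1p1
```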
– Configure the 11g RAC database infrastructure
• 11g RAC Clusterware configuration:
private interconnect: bond0: 192.168.9.71-78
2x OCRs and 3x voting disks on multipath devices
• 11g ASM configuration:
ORA_ASM_HOME=/opt/oracle/product/11.1.0
ASM instances provide the storage virtualization
• 11g RAC software provides the database service:
ORA_HOME=/opt/oracle/product/11.1.0/db_1
• Grid Control agent 10.2.0.5 connects to the OMS server
– Consolidate multiple databases
22. GRID IMPLEMENTATION: PHYSICAL GRID
• Size the DB requirements: CPU, IO, and memory
• Determine storage needs and the number of DB instances
• Determine which nodes will run the database
• Provision the storage volumes and create the ASM diskgroups
• Use DBCA to create the database using the ASM diskgroups
• For some ERP applications, convert the installed DB to RAC
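The provisioning steps above could be sketched as follows. This is a sketch, not the validated procedure: the diskgroup and volume names follow the storage table, while the node names and DBCA template are illustrative:

```shell
# Once the new volume is visible as a multipath partition, create the
# ASM diskgroup from the ASM instance, then let DBCA build the database
export ORACLE_HOME=/opt/oracle/product/11.1.0
export ORACLE_SID=+ASM1
$ORACLE_HOME/bin/sqlplus / as sysasm <<'EOF'
CREATE DISKGROUP diskgroup1 EXTERNAL REDUNDANCY
  DISK '/dev/mapper/blade-data1p1';
EOF
# Create the RAC database on the diskgroup in silent mode
# (DBCA options abbreviated; node names are illustrative)
$ORACLE_HOME/bin/dbca -silent -createDatabase \
  -templateName General_Purpose.dbc -gdbName db1 \
  -nodelist kblade1,kblade2 -storageType ASM -diskGroupName diskgroup1
```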
– Examples of Pre-created Databases
24. GRID IMPLEMENTATION: PHYSICAL GRID
– Scale out the physical Grid infrastructure
• Consolidate more databases onto the Grid
• Increase the capacity of the existing databases
• Scale out the storage:
a. Add an additional EqualLogic array to the storage group
b. Expand existing volumes or create new ones
c. Make the new volumes accessible to the servers
d. Create an ASM diskgroup, add a partition to an existing
diskgroup, or resize a diskgroup's partition [1]
• Scale the Grid by adding servers: use the 11g RAC add-node
procedure to add a new node to the cluster
• Expand the database to additional RAC nodes:
use Enterprise Manager's add-instance procedure to
add a new database instance to the database [2]
• Dynamically move the database to a less busy node:
use Enterprise Manager's add-instance and remove-instance
procedures to relocate the database instance [2]
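The add-node step above corresponds to Oracle's addNode.sh flow. A minimal sketch, assuming illustrative home paths and hostnames; in practice Grid Control's provisioning pack drives this:

```shell
# From an existing cluster node, extend the Clusterware home, then the
# RAC database home, to the new node (kblade9/kblade9-vip illustrative)
export CRS_HOME=/opt/oracle/product/11.1.0/crs
$CRS_HOME/oui/bin/addNode.sh -silent \
  "CLUSTER_NEW_NODES={kblade9}" \
  "CLUSTER_NEW_VIRTUAL_HOSTNAMES={kblade9-vip}"
export ORACLE_HOME=/opt/oracle/product/11.1.0/db_1
$ORACLE_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={kblade9}"
```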
25. GRID IMPLEMENTATION: VIRTUAL GRID
Implementation tasks
– Virtual server installation
– Virtual server network and storage configuration
– Connect the VM servers to Grid Control
– Manage the VM infrastructure through Grid Control
– Create guest VMs using VM templates
– Manage the resources of the guest VMs
Virtual server installation
– Prepare the local disk and enable virtualization in the BIOS
– Install Oracle VM Server 2.1.2
– Change the Dom0 memory in /boot/grub/menu.lst:
edit the line: kernel /xen-64bit.gz dom0_mem=1024M
– Ensure the VM agent is working: # service ovs-agent status
26. ORACLE VIRTUALIZATION INFRASTRUCTURE IMPLEMENTATION
Oracle VM Server network configuration
– eth0: public in dom0, presented to domU through xenbr0
– eth2, eth3: iSCSI connections
– eth4 and eth5 bonded to bond0 for the private interconnect
between VMs for the 11g RAC configuration, through xenbr1
– Ensure the OVM agent is running: service ovs-agent status
27. GRID IMPLEMENTATION: VIRTUAL GRID
dom0        IO Modules  Connection         IP               domU
eth0        A1          Public network     155.16.9.71-78   eth0
eth2, eth3  B1, B2      iSCSI connections  10.16.7.238-235
eth4, eth5  C1, C2      Bonded to bond0    192.168.9.82-85  eth1
– Customize the default Xen bridge configuration
a. Stop the default bridges: /etc/xen/scripts/network-bridges stop
b. Edit /etc/xen/xend-config.sxp:
replace the line (network-script network-bridges) with
(network-script network-bridges-dummy)
c. Edit /etc/xen/scripts/network-bridges-dummy to contain: /bin/true
d. Create the ifcfg-bond0, ifcfg-xenbr0, and ifcfg-xenbr1 scripts in
/etc/sysconfig/network-scripts
28. GRID IMPLEMENTATION: VIRTUAL GRID
ifcfg-eth0:   DEVICE=eth0, ONBOOT=yes, BOOTPROTO=none, BRIDGE=xenbr0
ifcfg-eth4:   DEVICE=eth4, TYPE=Ethernet, MASTER=bond0, USERCTL=no, ONBOOT=yes
ifcfg-bond0:  DEVICE=bond0, ONBOOT=yes, BOOTPROTO=none, BRIDGE=xenbr1
ifcfg-xenbr0: DEVICE=xenbr0, ONBOOT=yes, TYPE=Bridge, IPADDR=155.16.9.82, NETMASK=255.255.0.0
ifcfg-xenbr1: DEVICE=xenbr1, ONBOOT=yes, TYPE=Bridge, NETMASK=255.255.255.0
Restart the network services: # service network restart
Check the Xen bridges: [root@kblade9 scripts]# brctl show
bridge name  bridge id          STP enabled  interfaces
xenbr0       8000.002219d1ded0  no           eth0
xenbr1       8000.002219d1ded2  no           bond0
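The five ifcfg files above can be written in one pass. A minimal sketch that writes into a local demo directory rather than /etc/sysconfig/network-scripts, so it can be inspected safely; the IP address and netmasks are the slide's values:

```shell
#!/bin/sh
# Emit the ifcfg-* files from the listing above. On a real dom0 the
# target directory would be /etc/sysconfig/network-scripts.
DIR=./network-scripts-demo
mkdir -p "$DIR"

cat > "$DIR/ifcfg-eth0" <<'EOF'
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
BRIDGE=xenbr0
EOF

cat > "$DIR/ifcfg-eth4" <<'EOF'
DEVICE=eth4
TYPE=Ethernet
MASTER=bond0
USERCTL=no
ONBOOT=yes
EOF

cat > "$DIR/ifcfg-bond0" <<'EOF'
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
BRIDGE=xenbr1
EOF

cat > "$DIR/ifcfg-xenbr0" <<'EOF'
DEVICE=xenbr0
ONBOOT=yes
TYPE=Bridge
IPADDR=155.16.9.82
NETMASK=255.255.0.0
EOF

cat > "$DIR/ifcfg-xenbr1" <<'EOF'
DEVICE=xenbr1
ONBOOT=yes
TYPE=Bridge
NETMASK=255.255.255.0
EOF

echo "wrote 5 ifcfg files to $DIR"
```

After placing the files on each VM server, `service network restart` brings the bridges up as shown in the brctl output above.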
– Configure shared storage on dom0 for the VM servers
• For OVM repositories: blade_data4 (400 GB), blade_data6 (500 GB)
• For 11g RAC shared disks: OCRs, voting disks, ASM diskgroups
• Configure iSCSI and multipath devices on dom0:
use the iSCSI admin tool and Linux Device Mapper as on the physical servers
Check the multipath devices: ls /dev/mapper/*
/dev/mapper/blade-data6 /dev/mapper/mpath5 /dev/mapper/ovs_data4p1
/dev/mapper/blade-data6p1 /dev/mapper/vmocr-css1 /dev/mapper/vmocr-css1p1
/dev/mapper/vmracdb1 /dev/mapper/vmracdb1p1 /dev/mapper/ovs_data4
29. GRID IMPLEMENTATION: VIRTUAL GRID
Convert the OVM repository to shared disks: configure OCFS2
a) Edit /etc/ocfs2/cluster.conf
b) service o2cb stop
c) service o2cb configure
d) # mkfs.ocfs2 -b 4k -C 64k -L ovs /dev/mapper/ovs_data4p1
e) umount /OVS
f) Edit /etc/fstab:
#/dev/sda3 /OVS ocfs2 defaults 1 0
/dev/mapper/ovs_data4p1 /OVS ocfs2 _netdev,datavolume,nointr 0 0
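Step a) defines the OCFS2 cluster layout. A two-node /etc/ocfs2/cluster.conf might look like the following fragment; the node names and IP addresses are illustrative:

```shell
# /etc/ocfs2/cluster.conf sketch (hostnames and IPs are illustrative)
node:
        ip_port = 7777
        ip_address = 155.16.9.82
        number = 0
        name = kblade9
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 155.16.9.83
        number = 1
        name = kblade10
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2
```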
30. GRID IMPLEMENTATION: VIRTUAL GRID
g) Mount the OCFS2 partitions: mount -a -t ocfs2
h) Create directories under /OVS: running_pool, seed_pool, sharedDisk
i) Repeat steps a-h on all the VM servers
• Add additional volumes to the OVS repositories
– Connect the VM servers to Enterprise Manager Grid Control
• Meet the prerequisites:
a. Oracle user with the oinstall group
b. Oracle user's ssh user equivalence between dom0 and OMS
31. GRID IMPLEMENTATION: VIRTUAL GRID
c. Create /OVS/provxy
d. Oracle user sudo privileges: edit /etc/sudoers with visudo
Add the line: oracle ALL=(ALL) NOPASSWD: ALL
Comment out the line: Defaults requiretty
• Create the VM server pool: log in to Grid Control
33. ORACLE VIRTUALIZATION INFRASTRUCTURE IMPLEMENTATION
Create guest VMs using a VM template
– Each virtual machine is a node of the virtual Grid
– Methods of creating guest VMs: VM template; install media; PXE boot
– Import a VM template
• Download OVM_EL5U2_X86_64_11GRAC_PVM.gz to the repository
• Discover the VM template from Grid Control Virtual Central
37. ORACLE VIRTUALIZATION INFRASTRUCTURE IMPLEMENTATION
• Disks are shown in the VM as virtual disk partitions
• Attach the storage to the guest VM in vm.cfg:
a) vm.cfg: disk = ['file:/OVS/sharedDisk/racdb.img,xvdc,w!',]
b) vm.cfg: disk = ['phy:/dev/mapper/vmracdbp1,xvdc,w!',]
• Expose Xen bridges for the virtual network interfaces in vm.cfg:
vif = ['bridge=xenbr0,mac=00:16:3E:11:8E:CE,type=netfront',
'bridge=xenbr1,mac=00:16:3E:50:63:25,type=netfront',]
xenbr0 maps to eth0 in the guest VM
xenbr1 maps to eth1 in the guest VM
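Putting the disk and vif entries together, a guest VM's vm.cfg might contain something like the following. The OS-image path, shared-disk path, and MAC addresses are illustrative; the file is written locally here so the result can be inspected:

```shell
#!/bin/sh
# Sketch of a combined vm.cfg for a RAC guest VM: one image-backed OS
# disk plus a shared physical disk (w! = writable-shared), and two
# vifs on xenbr0/xenbr1. Values are illustrative, not from a live host.
cat > ./vm.cfg <<'EOF'
disk = ['file:/OVS/running_pool/racvm1/System.img,xvda,w',
        'phy:/dev/mapper/vmracdb1p1,xvdc,w!',
       ]
vif  = ['bridge=xenbr0,mac=00:16:3E:11:8E:CE,type=netfront',
        'bridge=xenbr1,mac=00:16:3E:50:63:25,type=netfront',
       ]
EOF
echo "vm.cfg written"
```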
39. ORACLE VIRTUALIZATION INFRASTRUCTURE IMPLEMENTATION
Consolidate enterprise applications on the Grid
– Applications and middleware on the virtual Grid
• Create a guest VM using the Oracle OEL 5.2 template
• Deploy the application on the guest VM
• Build a VM template from that VM
• Create new guest VMs based on the VM template
– Deploy database services on the physical Grid
• Provision an adequately sized storage volume from the SAN
• Make the volume accessible to all the physical Grid nodes
• Create the ASM diskgroup
• Create the database service on the ASM diskgroup
• Create the application database schema in the database
• Establish the application database connections
– Deploy a DEV/test application suite on the virtual Grid
• Multi-tier nodes run on the VMs
• Fast deployment based on templates
40. CASE STUDIES OF GRID HOSTING APPLICATIONS
Grid architecture to host applications
– Oracle E-Business Suite on RAC/VM
• Three application-tier nodes on three VMs
• Two database-tier nodes on a two-node RAC
– Oracle E-Business Suite single node on the virtual Grid
• Both the application-tier and database-tier nodes on VMs
– Banner applications on the physical/virtual Grid
• Two application-tier nodes on two VMs
• Two database-tier nodes on a two-node RAC
– Provisioning Oracle 11g RAC on the virtual Grid
• Two database RAC nodes on two VMs
• 11g RAC provisioned by the Grid Control Provisioning Pack's
11g RAC provisioning procedure
• Add an additional RAC node on a new VM with the Grid Control
Provisioning Pack's add-node procedure
41. CASE STUDIES OF GRID HOSTING APPLICATIONS
[Diagram: Grid architecture hosting the applications]
42. CASE STUDIES OF GRID HOSTING APPLICATIONS
[Screenshot: the virtual Grid shown in Grid Control Virtual Central]
43. SUMMARY
• Dell Grid POC project: a pre-built Grid with physical and virtual grids
• Grid Control as the unified management solution for the Grid
• Dell blades as the platform of the Grid: RAC nodes and VM servers
• EqualLogic provides the shared storage for the physical and virtual Grids
• The Grid provides the infrastructure to consolidate enterprise
applications as well as RAC databases
• Acknowledgments to Oracle engineers Akanksha Sheoran, Rajat Nigam,
Daniel Dibbets, Kurt Hackel, Channabasappa Nekar, and Premjith Rayaroth,
and Dell engineer Roger Lopez
• Related OpenWorld presentations:
– ID# S308185, Provisioning Oracle RAC in a Virtualized Environment
Using Oracle Enterprise Manager, 10/11/09 13:00-14:00, Kai Yu & Rajat Nigam
– ID# S310132, Oracle E-Business Suite on Oracle RAC and Oracle VM:
Architecture and Implementation, 10/14/09 10:15-11:15, Kai Yu & John Tao
44. REFERENCES
1. Best Practices for Deployment of Oracle® ASM with Dell EqualLogic PS iSCSI Storage System, Dell white paper
2. Oracle press release: Oracle® Enterprise Manager Extends Management to Oracle VM Server Virtualization
3. Oracle® Enterprise Manager Concepts, 10g Release 5 (10.2.0.5), Part Number B31949-10
4. Oracle Enterprise Manager Grid Control ReadMe for Linux x86-64, 10g Release 5 (10.2.0.5), April 2009
5. How to Enable the Oracle VM Management Pack in EM Grid Control 10.2.0.5, Metalink Note # 781879.1
6. Oracle VM: Converting from a Local to a Shared OVS Repository, Metalink Note # 756968.1
7. How to Add Shared Repositories to Oracle VM Pools with Multiple Servers, Metalink Note # 869430.1
8. Implementing Oracle Grid: A Successful Customer Case Study, Kai Yu and Dan Brint, IOUG SELECT Journal, Volume 16, Number 2, Second Quarter 2009
9. Deploying Oracle VM Release 2.1 on Dell PowerEdge Servers and Dell/EMC Storage, Dell white paper
10. Dell Reference Configuration: Deploying Oracle® Database on Dell™ EqualLogic™ PS5000XV iSCSI Storage, Dell technical white paper
11. Technical Best Practices for Virtualization & RAC, Oracle RAC SIG web seminar slides, Michael Timpanaro-Perrotta and Daniel Dibbets