IBM® PowerLinux™
Open Source Infrastructure Services
Implementation and Tuning Guide
April 23, 2012
(Version 9)
Paul Clarke and Jason Furmanek
Lab Services Power Systems
pacman@us.ibm.com
furmanek@us.ibm.com
Summary
The Open Source Infrastructure Services (OSIS) Implementation and Tuning Guide provides an overview of the implementation and tuning of open source applications on an IBM PowerLinux system. The document first introduces a popular set of open source applications in support of web, mail, file, print and network serving. It shows solution scenarios based on open source applications from the Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) Linux distributions. It then shows considerations for the implementation and tuning of these solutions.
The intended audience for this guide is professionals planning for open source applications on IBM PowerLinux. It is one of two OSIS solution guides. For information on the OSIS reference architecture and sizing, refer to the second solution guide, IBM PowerLinux™ Open Source Infrastructure Services Reference Architecture and Sizing Guide.
Table of Contents
1 Introduction
2 Solution Guide Use
3 OSIS Reference Architecture
  3.1 Virtualization with IBM PowerVM
  3.2 Open Source Infrastructure Services (OSIS)
4 Software Components
5 Installation and Configuration
  5.1 Install Prerequisites
  5.2 Installing Virtual I/O Server (VIOS)
    5.2.1 Accessing FSP Using Advanced System Management (ASM)
    5.2.2 Installing VIOS
    5.2.3 Reconfiguring FSP Post-VIOS Installation
    5.2.4 Configuring VIOS
    5.2.5 Accessing Integrated Virtualization Manager (IVM)
  5.3 Creating Virtual Servers
    5.3.1 Preparing VIOS for Virtual Servers
    5.3.2 Creating Virtual Disks for Virtual Server Storage
    5.3.3 Creating Virtual Servers
    5.3.4 Preparing a Virtual Server for Activation
    5.3.5 Installing Linux on a Virtual Server’s Partition
    5.3.6 Configuring a Linux Software RAID Device
  5.4 Configuring OSIS Workloads
    5.4.1 Configuring webmin for Administration
    5.4.2 Configuration of Mail Application Server
    5.4.3 Configuring Postfix Anti-spam
    5.4.4 Configure Sending Mail for Performance Testing
    5.4.5 Configure Retrieving Mail for Performance Testing
  5.5 Monitoring Software
    5.5.1 Configuring Ganglia on Monitored Virtual Servers
    5.5.2 Configuring Ganglia on Collection System
6 Performance Tuning
  6.1 Reference Architecture Tuning Performed
  6.2 Physical and Virtual Processor Settings
  6.3 Additional Tuning Considerations
  6.4 IBM Java Performance Tuning
7 Appendix
  7.1 Install Linux Directly Using VNC
  7.2 Creating Partitions with LVM During Direct Linux Installation
  7.3 Creating Partitions with LVM During IBM Installation Toolkit Installations
8 References
Disclaimer and Trademarks
1 Introduction
A survey by OpenLogic reveals that many enterprises are considering open source, and adoption of open source is on the rise. The IBM PowerLinux Open Source Infrastructure Services (OSIS) reference architecture consists of open source software solutions available from the Linux distributions provided by Red Hat and SUSE and from the open source community.
IBM PowerLinux servers with PowerVM™ offer a highly virtualized, workload-optimized, cloud-ready platform that can support more workloads per server and greater throughput per virtual server, saving up to 16% or more in acquisition costs as compared to HP ProLiant systems with VMware.1 It is an ideal platform for replacing more expensive infrastructure applications with robust open source offerings. Open source applications for PowerLinux servers are included and supported with commercial Linux distributions from Red Hat and Novell.
These open source applications provide a highly flexible environment for IBM PowerLinux servers, which means choices in architecture, sizing and individual server configurations.
Combining the IBM PowerLinux 7R2 with PowerVM's superior virtualization capabilities enables up to 160 simultaneously active virtual servers on a single 16-core system. Such a capable system provides an excellent reference platform for a consolidated open source solution.
This paper describes a study IBM has performed with several of the open source applications. Based on experiences and results from installation, configuration and performance testing, this paper describes architectures and results from tested configurations, along with suggested approaches to scaling, sizing, migration, consolidation, and availability.
1. Achieve 16% lower total cost of acquisition over years with 2 IBM POWER7, two-socket, 16-core, 3.55 GHz servers instead of 4 HP DL380 G7 two-socket, 12-core, 3.3 GHz servers, leveraging the higher utilization and virtualization efficiencies of PowerVM.
Performance based on IBM analysis of the SPECint_rate benchmark on the HP DL380 G7 two-socket, 12-core server (3.33 GHz Intel® Xeon® 5680 processor) and on the IBM PowerLinux 7R2 two-socket, 16-core server (3.55 GHz processors).
Prices from www.hp.com (link resides outside of ibm.com).
Assumption is that the throughput impact of VMware is 10%, based on SAP Note 1409608 (10% overhead) and the VMware whitepaper “Virtualizing Business Critical Applications”, 2010 (“keeping virtualization overhead at a very limited 2-10 percent”) at http://www.vmware.com/files/pdf/VMW_10Q1_WP_vSPHERE_USLET_EN_R6_proof.pdf
2 Solution Guide Use
When reviewing the potential use of IBM PowerLinux systems with open source applications, the OSIS solution guides can be part of an overall assessment process with a customer. Figure 1 below outlines several phases and activities that are appropriate in working on an OSIS proposal with a client.
1. Discover client’s technical requirements and usage (hardware, software, data center)
2. Analyze their requirements and current environment
3. Exploit with proposals based on IBM PowerLinux servers and OSIS
Figure 1 Client Technical Discovery, Analysis and Exploitation
3 OSIS Reference Architecture
The IBM PowerLinux Open Source Infrastructure Services (OSIS) solution landscape consists of IBM PowerLinux 7R2
hardware, virtualization software, Linux OS, and open source applications. Refer to Figure 2 below for details.
Figure 2 OSIS Reference Architecture Overview
NOTE: IBM provides the IBM Installation Toolkit with simplified setup tools for several commonly used applications for
mail, web, print, file, and network serving.
The OSIS reference architecture landscape includes the following software:
1. IBM provided:
- IBM PowerVM Enterprise Edition: provides virtualization of IBM PowerLinux servers, allowing up to 160 virtual servers configured per system. PowerVM includes:
  - Virtual I/O Server (VIOS): manages system CPU, memory and network resource sharing
  - Integrated Virtualization Manager (IVM): provides a web-based virtualization manager
- IBM Installation Toolkit: a no-charge offering that provides installation of Linux and configuration of open source applications for web, mail, file, print and network serving
- Optional management consoles (for solutions with multiple systems):
  - Hardware Management Console (HMC): includes both hardware and software used to perform a variety of system management tasks for all IBM Power Systems, including IBM PowerLinux. In particular, the HMC may be used to create or change logical partitions, including dynamically assigning hardware to a partition.
  - IBM Systems Director Management Console (SDMC): provides server hardware management and virtualization management.
- Optional Cloud Foundation (advanced virtualization with image control):
  - IBM® Systems Director for IBM PowerLinux™: provides the integrated tools needed to efficiently visualize and communicate the relationships of physical and virtual systems that are discovered, monitor their health, define and receive threshold alerts, and update system firmware and operating environments. Built-in automation capabilities enable IT administrators to schedule updates and configuration changes to proactively avoid problems, and reduce the administrative burden of routine maintenance.
  - IBM® Systems Director VMControl™ for IBM PowerLinux™: provides creation and modification of system pools using PowerLinux virtual workloads, makes dynamic virtual workload adjustments, and moves workloads within system pools, resulting in an optimized virtual environment with increased resilience to cope with planned or unplanned downtime.
2. The Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) Linux distributions provide both a Linux operating system and the following open source applications of the OSIS landscape:
- Postfix: provides mail serving by receiving and delivering mail. Postfix is an alternative to the sendmail server, which is also included with Linux distributions.
- Dovecot and Cyrus: provide interoperability with other mail servers and clients using the POP, IMAP4 and MSS industry-standard protocols
- fetchmail: provides mail clients with mail retrieval
- procmail, SpamAssassin: provide mail filtering and anti-spam protection
- smtp-source: provides mail generation for performance testing
- Apache HTTP Server: provides HTTP web serving
- MySQL: provides database serving
- PHP: provides PHP scripting
- Berkeley Internet Name Domain (BIND): provides domain name services (DNS)
- Squid: a caching proxy for the web supporting HTTP, HTTPS, FTP, and more
- mpstat, nmon, iostat: provide system utilization information (CPU, memory, network, disk)
3. Other optional (no-charge) open source applications:
- Ganglia: provides a system resource utilization monitoring system that is highly scalable and customizable
- webmin: provides a web-based administration interface for managing Linux applications and services associated with the OSIS reference architecture
- Drupal: provides web-based content management
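The distribution-supplied components above can typically be pulled in with the distribution's package manager. As a sketch, the RHEL 6 package names below are assumptions and should be verified with `yum search` before use (SLES uses zypper, and some names differ, for example apache2 instead of httpd):

```shell
# Hypothetical RHEL 6 package names for the OSIS mail and web stacks.
MAIL_PKGS="postfix dovecot procmail fetchmail spamassassin"
WEB_PKGS="httpd mysql-server php php-mysql"

# Printed rather than executed, since installation needs root and a configured repository:
echo "yum -y install $MAIL_PKGS $WEB_PKGS"
```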
This solution guide focuses on the following mail and web serving applications that are included with Linux distributions and optionally installed and configured using the IBM Installation Toolkit.
- The web serving applications include the Apache HTTP Server (Apache), which typically runs PHP/Perl scripts with a database such as MySQL. The combination of Linux, Apache, PHP and MySQL is commonly referred to as the “LAMP” architecture or stack.
- The mail serving applications include Postfix, Dovecot and Cyrus. The Postfix mail server supports the Simple Mail Transfer Protocol (SMTP), while both Dovecot and Cyrus integrate with Postfix to provide Post Office Protocol (POP) and Internet Message Access Protocol (IMAP) access. A comparison of mail servers is available on Wikipedia.
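As a sketch of how these pieces fit together, a minimal Postfix plus Dovecot configuration might look like the following. The hostnames and domains are placeholders, the Maildir layout is one common choice rather than the only one, and distribution defaults vary:

```
# /etc/postfix/main.cf (fragment) - hostname/domain values are placeholders
myhostname = mail.example.com
mydomain = example.com
inet_interfaces = all
mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
home_mailbox = Maildir/

# /etc/dovecot/dovecot.conf (fragment) - serve both POP3 and IMAP
protocols = imap pop3
mail_location = maildir:~/Maildir
```

With a layout like this, Postfix accepts and delivers mail into each user's Maildir, and Dovecot serves that same Maildir to POP/IMAP clients.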
Typical x86-based landscapes for open source mail and web servers consist of multiple application servers and possibly storage servers. As shown below, a typical end-user request flows as follows:
For web serving:
1. A browser submits an end-user request that the web server corresponding to the web page processes.
2. The web server retrieves static and/or dynamic data from a storage device and returns it to the browser.
3. Programs such as PHP scripts retrieve dynamic content from a database and return the content to the browser.
For mail serving:
1. Mail clients send end-user emails to the mail server in their mail domain.
2. If necessary, the mail server forwards emails to the mail server of the recipient’s mail domain.
3. Mail filtering is applied (open source examples include the Procmail Mail Filter and SpamAssassin Mail Filter).
4. The mail server stores mail.
5. The recipient requests mail.
The IBM Information Center provides additional information about running Linux open source applications on IBM Power systems.
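Mail filtering step 3 is commonly implemented by piping each inbound message through SpamAssassin from a procmail recipe. A minimal sketch, assuming default paths and the standard X-Spam-Status header that SpamAssassin adds (adapt paths and folder names to your installation):

```
# /etc/procmailrc (fragment): pass each message through SpamAssassin
:0fw
| /usr/bin/spamassassin

# then file anything SpamAssassin tagged as spam into a separate Maildir folder
:0:
* ^X-Spam-Status: Yes
$HOME/Maildir/.Spam/
```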
3.1 Virtualization with IBM PowerVM
Utilizing IBM PowerVM for virtualization is fundamental to the OSIS landscape. Notice how each of the following goals for virtualization aligns with the requirements for an OSIS reference architecture:
- Consolidate multiple environments, including underutilized servers and systems with varied and dynamic resource requirements.
- Grow and shrink resources dynamically, derive energy efficiency, save space, and optimize resource utilization.
- Deploy new workloads by provisioning virtual machines or new systems rapidly to meet changing business demands.
- Develop and test applications in secure, independent domains, while production can be isolated to its own domain on the same system.
- Transfer live workloads to support server migrations, balance system load, or avoid planned downtime.
- Control server sprawl and thereby reduce system management costs.
IBM PowerLinux systems deployed for open source mail and web applications may consist of either add-on systems or replacements of existing systems. Using IBM PowerVM virtualization technology, a single system can run multiple virtual servers, each running one or more applications. Virtual servers can share physical resources such as processors, memory, and networking devices. For proper deployment, sizing of these IBM PowerLinux systems and virtual servers is required. The term IBM Power Systems virtual server is used throughout this paper, instead of the equivalent term Logical Partition (LPAR) or the term Virtual Machine (VM) used with x86-based virtualization.
PowerVM is the industry-leading virtualization solution for AIX, IBM i, and Linux environments on IBM POWER technology. PowerVM offers a secure virtualization environment, built on the advanced RAS features and leadership performance of the Power Systems platform. It features leading technologies such as the Power Hypervisor, Micro-Partitioning, Dynamic Logical Partitioning, Shared Processor Pools, Shared Storage Pools, the Integrated Virtualization Manager, PowerVM Lx86, Live Partition Mobility, Active Memory Sharing, N_Port ID Virtualization, and Suspend/Resume. PowerVM is a combination of hardware enablement and value-added software. Additional information is available in the IBM Redbooks publication IBM PowerVM Virtualization Introduction and Configuration, SG24-7940.
By applying PowerVM virtualization technology, it is possible to configure multiple virtual web, database, and mail servers utilizing a pool of system resources within one IBM PowerLinux system. Keep network load from affecting the physical network by utilizing a high-capacity virtual network between these virtual servers. The benefits of virtualizing workloads with PowerVM in this way include:
- Higher resource utilization: promoting high resource utilization by virtualizing resources, including processors, memory, and I/O, across multiple virtual machines.
- Consolidation: hosting diverse workloads on the same server.
- Reduced costs: saving system administrator time and IT infrastructure costs.
- Scalability: simplifying deployment of multiple copies of the same workload type. PowerVM supports virtual servers as small as 1/10 of a processor core, allowing up to 160 virtual servers on an IBM PowerLinux system with 16 cores.
- Recoverability: bringing a workload back online after an outage, quickly and reliably.
- Rapid provisioning: deploying the ready-to-run workload, quickly and easily.
- Availability: eliminating planned downtime by moving a running partition to another server with Live Partition Mobility while upgrading or maintaining hardware, without interrupting productive work.
Note that a key difference between the sharing of physical system resources using PowerVM versus x86-based virtualization technologies is the handling of virtual server CPU and memory commitments. Unlike x86-based virtualization technologies that only add CPU and memory resources to active virtual servers, PowerVM can also remove both CPU and memory resources, reallocating these resources transparently to other virtual servers when and where the resources are needed. For example, PowerVM dynamically removes resources from an underutilized virtual mail server and adds the resources to a virtual web server when needed. This feature, commonly referred to as Dynamic Logical Partitioning (DLPAR), allows for configuration of a larger number of virtual servers per system, up to 160 virtual servers for a 16-core Power System. For additional information on higher system utilization and performance using PowerVM, refer to A Comparison of PowerVM and x86-Based Virtualization Performance.
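A DLPAR adjustment like the one described (take memory from an underutilized mail virtual server and give it to a web virtual server) can be driven from the HMC or IVM command line with chhwres. A sketch only: the managed system and partition names below are placeholders, and the commands are printed rather than executed because they require a live managed system:

```shell
# Hypothetical DLPAR memory move via the HMC/IVM chhwres command.
SYS="powerlinux-7r2"   # managed system name (placeholder)

echo "chhwres -m $SYS -r mem -o r -p mailvs1 -q 1024"   # remove 1024 MB from mailvs1
echo "chhwres -m $SYS -r mem -o a -p webvs1 -q 1024"    # add 1024 MB to webvs1
```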
3.2 Open Source Infrastructure Services (OSIS)
With the adoption of virtualization using IBM PowerVM, it is possible to consolidate both web and mail to one or more IBM PowerLinux systems using virtual servers, as shown in Figure 3. Migration of multiple physical servers to virtual servers is possible without changing the network flow or application handling of requests. However, there are several key differences to point out:
1. It is possible to configure Virtual Ethernet connections directly between web and database virtual servers, and between mail virtual servers. These high-speed (in-memory) virtual Ethernet connections reduce IP traffic on network switches, improving the overall network performance and network reliability.
2. Virtual servers dynamically share CPU and memory. Since web and mail workloads vary throughout the day, a more efficient utilization of system resources occurs. PowerVM's efficient and dynamic sharing of physical resources by the virtual servers enables adding additional virtual servers to leverage any underutilized system resources. When sizing open source workloads, proper selection of hardware, monitoring of system resources and a careful mix of workloads can result in a highly utilized system that consolidates a significant number of workloads on a single server.
PowerLinux highlight: Consolidate up to 160 physical servers to a single PowerLinux 16-core system using PowerVM and virtual servers.
  16 cores per system
  x max 10 vCPUs per core
  x max 4 threads per vCPU
  = max 640 threads (or logical CPUs) per system
Figure 3 OSIS Web & Mail Virtual Servers
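The capacity arithmetic in the highlight box above can be checked directly:

```shell
# Capacity arithmetic from the PowerLinux highlight: 16 cores per system,
# up to 10 virtual servers (vCPUs) per core, 4 SMT threads per vCPU.
cores=16
vservers_per_core=10
threads_per_vcpu=4

max_vservers=$((cores * vservers_per_core))                    # 160 virtual servers
max_threads=$((cores * vservers_per_core * threads_per_vcpu))  # 640 threads (logical CPUs)

echo "$max_vservers virtual servers, $max_threads logical CPUs"
# → 160 virtual servers, 640 logical CPUs
```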
4 Software Components
Table 1 below provides a summary of the software components used in the tested OSIS configuration.

Software: Linux distribution (supported at the time of this writing: RHEL 5.7, 6.1 and 6.2; SLES 11 SP1)
  Description: Red Hat Enterprise Linux (RHEL) or Novell SUSE distributions of Linux software
  Source: Linux distributors (Red Hat or Novell). If ordered from IBM, IBM provides a key to allow registration and download from the distributor’s website. Red Hat and SUSE evaluation software is available.
  Documentation: SLES 11 and RHEL documentation

Software: IBM Installation Toolkit v5.1
  Description: Provides a UI for installing Linux on Power Systems, including services for installing and configuring OSIS
  Source: IBM Installation Toolkit website; refer to Download for the ISO file
  Documentation: IBM Installation Toolkit website (refer to Install & Use for documentation files); IBM Information Center

Software: Postfix, Dovecot, Cyrus, procmail, fetchmail, SpamAssassin
  Description: Linux open source applications for mail services
  Source: Included with Linux distribution
  Documentation: Linux manual (man pages); IBM Information Center; Postfix documentation

Software: Apache HTTP server, MySQL, PHP
  Description: Open source LAMP stack for web services
  Source: Included with Linux distribution
  Documentation: Linux manual (man pages); IBM Information Center

Software: webmin
  Description: Open source utility providing a UI for managing Linux services and applications
  Source: webmin website, downloads
  Documentation: webmin website

Software: Ganglia Monitoring System
  Description: Graphic UI and monitoring system for use with multiple systems, including IBM PowerLinux and virtual servers
  Source: Ganglia for IBM Power website
  Documentation: Ganglia website

Software: mpstat, nmon, iostat
  Description: Open source utilities for monitoring a virtual server’s CPU, memory, disk I/O and network I/O
  Source: Included with Linux distribution
  Documentation: Linux manual (man pages)

Software: Drupal v7
  Description: Content management system written in PHP for hosting web sites with both static and dynamic content
  Source: Available from Extra Packages for Enterprise Linux 6 (EPEL6)
  Documentation: Drupal website

Software: Advanced System Management (ASM)
  Description: Provides an interface into the FSP (service processor) firmware
  Source: Provided with IBM PowerLinux
  Documentation: IBM Information Center

Software: IBM PowerVM Virtual I/O Server (VIOS)
  Description: Provides virtualization services
  Source: Included with IBM PowerVM Editions
  Documentation: IBM Information Center; Virtual I/O Server information and fixes in Fix Central

Software: Integrated Virtualization Manager (IVM)
  Description: Provides a UI for managing virtualization
  Source: Included with VIOS
  Documentation: IBM Information Center; Getting started with virtualization

Table 1 VIOS Software
5 Installation and Configuration
The following sections cover several areas of software installation, including the installation of the PowerVM Virtual I/O
Server, Linux operating system and open source software.
5.1 Install Prerequisites
Below are the prerequisites for installation of VIOS and Linux:
1. Hardware
NOTE: A 220V PDU is required for systems ordered with the default 220V power supply (feature code 5603) and power cords (feature code 6671). The 6671 power cord looks similar to a standard 110V power cord and does allow the FSP to power on with 110V, but the virtual servers will not start until 220V is used. The system will produce service code 11002613 when attached to a 110V power source. More information is available at http://publib.boulder.ibm.com/infocenter/powersys/v3r1m5/topic/area7/11002613.htm
a. Ethernet cables for each adapter Ethernet port and an additional cable for the HMC port
b. One 9-pin serial cable (with all pins wired) for connecting a PC to the system, used during VIOS installs.
c. An RJ45-to-serial adapter (IBM feature code 3930) is required to convert from a DB9 to an RJ45 style serial port.
NOTE: This cable is a 'null-modem' communication cable. If an incorrect (partially wired) cable is used, then the VIOS install process may fail (the terminal window stops responding).
NOTE: For personal computers without a serial port, a USB-to-serial adapter may be used.
d. Install Fibre Channel wrap plugs on any open ports. Make sure they are not Fibre Channel protective covers/dust boots.
NOTE: If an orange bracket used in shipping is connected to the back of the power supply, remove it, push the power supplies in, and then connect the power cords.
2. Media
a. PowerVM VIOS installation DVD
b. IBM Installation Toolkit v5.1 as an ISO file stored on a CD/DVD or on a PC’s disk drive
c. Linux distribution as an ISO file stored on a CD/DVD or on a PC’s disk drive
3. Networking
a. Static IP address for accessing the flexible service processor (FSP) via the Advanced System Management (ASM) interface
b. Static IP address for accessing VIOS via the Integrated Virtualization Manager (IVM) web-based interface
c. Static IP addresses for each of the virtual servers
4. Internet connectivity to access
a. Linux repositories containing additional open source software
b. IBM.com for IBM Installation Toolkit updates
c. Linux distribution updates
5.2 Installing Virtual I/O Server (VIOS)
This section covers the installation of VIOS without the use of a Hardware Management Console (HMC). This would be typical of OSIS solutions with a few IBM PowerLinux systems. Larger solutions could utilize an HMC or the Systems Director Management Console (SDMC).
The following sections cover the configuration required to access the flexible service processor (FSP), changing the system boot device, and the installation of VIOS.
5.2.1 Accessing FSP Using Advanced System Management (ASM)
Access to the flexible service processor (FSP) is performed using the CLI or the GUI of Advanced System Management (ASM), or both. The ASM GUI is more user friendly for overall configuration of the FSP, while the ASM CLI is necessary to change the system’s boot configuration. The following sections cover the configuration of both interfaces.
5.2.1.1 Configuring the ASM Command Line Interface
The ASM command line interface (CLI) is used for configuring the FSP to boot from the VIOS installation media. A PC can access the ASM CLI over a serial connection using serial terminal emulator software such as Microsoft HyperTerminal or PuTTY.
For a Linux based PC, minicom or other terminal emulator software may be used.
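On a Linux PC, the serial settings used for the VIOS install console (19200 baud, 8 data bits, 1 stop bit, no parity) can be captured in a minicom configuration. A sketch only: the device path /dev/ttyUSB0 is an assumption for a USB-to-serial adapter (check dmesg after plugging it in; a built-in port is typically /dev/ttyS0):

```
# ~/.minirc.dfl (fragment) - device path is an assumption
pu port      /dev/ttyUSB0
pu baudrate  19200
pu bits      8
pu parity    N
pu stopbits  1
```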
The following section covers PuTTY setup only. Note that PuTTY is also very useful for providing remote terminal sessions into Linux virtual servers.
1. Connect the power cords to the PDU and wait until the control panel displays “01” (a series of progress codes appear first).
NOTE: Do not connect an Ethernet cable to the HMC1 or HMC2 ports yet (done later).
NOTE: The system is powered on if the light on the control panel is green.
NOTE: To view the control panel, press the blue switch to the left, then pull out the control panel all the way, and then pull it down.
2. Install PuTTY on a personal computer.
3. Connect the serial cable to the top RJ45 connector found next to the PowerLinux power supply.
4. Connect the serial cable directly to the PC’s serial connector, or use a serial-to-USB adapter to connect to a USB port.
5. Run PuTTY.
6. From the PuTTY interface:
a. For Connection type, click Serial
b. For Serial line, type COMx, where x is the COM port associated with the serial connector. If not known, try different COMx values until a connection is made.
c. For Speed, enter 19200 (parity: none; data/stop bits: 8/1)
d. Click Open to open a terminal emulation window over the serial connection to ASM.
e. Press Enter to request a response from ASM. If a connection is not made, try a different COMx port.
f. Log in (the default ID is admin and the default password is admin).
5.2.1.2 Configuring the ASM Graphical User Interface
Access the ASM graphical user interface (GUI) using a web browser and a private Ethernet connection. By default, the IBM PowerLinux system ships with the default IP addresses indicated below. Optionally, configure ASM to use a different static address that would provide additional network connectivity.
NOTE: For previously configured systems where the FSP IP address may no longer be the default value, use the ASM CLI to display the IP address of the FSP by selecting Network Services, Network Configuration, IPv4, and Configure interface Eth0. If the FSP IP address has changed, then use the new values in the next steps.
First, use the system’s default IP address to access the ASM GUI by:
1. connecting an Ethernet cable from a personal computer (PC) to either HMC port 1 or 2, identified with labels on the back of the PowerLinux system
2. setting the PC’s TCP/IP connection properties to
a. an address in the same subnet as the current FSP IP address. For example, when using the FSP factory default addresses: 169.254.2.148 (when using HMC port 1) or 169.254.3.148 (when using HMC port 2)
b. a subnet mask of 255.255.255.0
3. opening a browser to the FSP IP address. For example, when using a factory default address, use either
a. https://169.254.2.147 (HMC port 1) or
b. https://169.254.3.147 (HMC port 2)
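The "same subnet" requirement in step 2a, combined with the 255.255.255.0 mask in step 2b, simply means the first three octets of the PC address must match the FSP address. A quick check, using the factory default HMC port 1 addresses from the steps above:

```shell
# With a 255.255.255.0 (/24) mask, the PC and FSP must share the first three octets.
FSP="169.254.2.147"   # factory default FSP address on HMC port 1
PC="169.254.2.148"    # example PC address from step 2a

# ${VAR%.*} strips the last octet, leaving the first three for comparison.
if [ "${FSP%.*}" = "${PC%.*}" ]; then
  echo "same /24 subnet: OK"
else
  echo "different subnets: the browser will not reach ASM"
fi
# → same /24 subnet: OK
```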
13. Once connected to ASM using the default IP address, ASM can be configured with a static IP address as follows:
1. from the ASM interface, open the Network
Configuration panel as follows:
a. click Expand all menus
b. click Network Services
c. click Network Configuration
2. Update the Network interface eth0 with the information for the new static IP address to be used by ASM
3. Save the configuration
4. Move the Ethernet cable from the PC to an Ethernet switch associated with the new static IP address for ASM
5. Connect a personal computer to the same network as the new IP address for ASM
6. Open a browser to the IP address for ASM
5.2.2 Installing VIOS
The OSIS architecture requires the installation of VIOS for virtualizing resources. The following sections cover the installation of VIOS from the PowerVM media and configuring VIOS with a TCP/IP address, allowing access to the Integrated Virtualization Manager (IVM) from a web browser.
Install the PowerVM Virtual I/O Server (VIOS) from the PowerVM product DVD media as follows:
1. Insert the DVD with VIOS into the PowerLinux DVD drive.
2. Access the ASM console using a web browser (see previous section for URL)
3. Authenticate with ID and password, which are by default admin
and admin
4. Configure the system to boot from the PowerVM VIOS CD-ROM
a. Select Power/Restart Control
b. Select Power On/Off System
c. Select Running (Auto-Start Always) for Server firmware
start policy
d. Select Boot to SMS menu for AIX/Linux partition mode
boot
e. Click Save settings and power on
5. Use PuTTY to connect to the ASM CLI (see previous section for serial connection information).
6. PowerVM Firmware boot prompts
a. Optionally select a language; press Enter to continue
b. Press 2 and Enter to select Continue Boot
c. Press 1 and Enter to accept the license
d. Press 1 to select SMS Menu
7. Use SMS Menu to change the boot order
a. Press 2 to Continue Password Entry
b. Enter SMS password (admin by default)
c. From Main Menu, press 5 for Select Boot Options
d. From Multiboot, press 1 for Select Install/Boot Device
e. From Select Device Type menu, press 7 for List All Devices
f. From List All Devices menu, select the CD/DVD device
g. Select 2 for Normal Boot Mode
h. Select 1 to exit SMS
8. After a few minutes, a prompt appears requesting selection of the device to use for the console. Select the current device to use as the console.
9. After a few more minutes, VIOS will start installing
10. After a few more minutes, select the language to be used in the install
11. To install PowerVM Enterprise Edition
a. from the Installation and Maintenance menu, select 2 for Change / Show Installation Settings and Install
b. From the settings menu,
i. Select 5 for Select Edition to change the edition from the default Express value to Standard. Select option 5 again to change the value to Enterprise.
ii. Select 0 to install with the settings listed
12. Installation takes about 30 minutes. Status is displayed during the install
13. After installation, the system reboots back to the SMS menu
14. Reset the boot device to use the disk used during VIOS install (hdisk0 by default). Use the procedures listed in step 7
above to change the boot device.
5.2.3 Reconfiguring FSP Post VIOS Installation
To provide control over the boot order during VIOS installation, the FSP was configured to boot to SMS. With VIOS
installed, use one of the ASM interfaces to reconfigure the FSP to boot to VIOS and not stop at SMS.
The following covers the configuration via the ASM GUI (use of the ASM CLI would require restarting the system):
1. Click Power On/Off System
2. For AIX/Linux partition mode boot, select Continue to operating system
3. Click Save settings (see below)
4. At this point, if the system reboots, VIOS will automatically start.
5.2.4 Configuring VIOS
After installing VIOS, continue to use the same PuTTY terminal session to configure the date and TCP/IP settings for accessing IVM from a browser.
1. Sign onto VIOS for the first time, accept terms:
a. When prompted, log in using the “padmin” ID
b. Enter a new password
c. Press Enter to accept the VIOS maintenance terms and
conditions
d. Enter license -accept to accept the license terms
2. Use the cfgassist command to change the date/time and configure TCP/IP for VIOS.
3. Optionally configure TCP/IP for VIOS from the command line as follows
a. Enter lstcpip -adapters to display a list of Ethernet adapters
b. If you are unsure of which physical adapter the interface you want to use refers to, enter the following command
to get the physical location code:
lsdev -dev entx -field name description physloc status -fmt ,
where entx is a particular interface. Notice the space before the comma.
The output will look similar to the screenshot above. The physical location code in this example is:
U78AB.001.WZSGRE5-P1-C7-T1
The pertinent information is “P1-C7-T1”, which refers to the planar (P1), the slot number C7 (which can be
located by finding the numbered slots on the rear of the physical frame), and port number 1 (T1).
c. Configure TCP/IP using the mktcpip command by providing IP address information and the interface, enx for the
entx adapter (see screenshot to right).
d. Example
mktcpip -hostname powerlinuxivm -inetaddr 9.5.110.171 -interface en0
-start -netmask 255.255.255.0 -gateway 9.5.110.1 -nsrvaddr 9.10.244.100
-nsrvdomain rchland.ibm.com
e. Enter lstcpip -stored to verify the configuration
f. Enter lstcpip -routtable to verify routing
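The location-code reading in step b can be scripted. The helper below is a sketch (slot_from_location is a name invented here, not a VIOS command): given the full physical location code printed by lsdev, it strips the enclosure prefix and keeps the planar/slot/port suffix used to find the adapter on the rear of the frame.

```shell
# Extract the P<planar>-C<slot>-T<port> suffix from a full location code.
slot_from_location() {
    # Drop everything up to and including the first '-' (the enclosure part).
    echo "${1#*-}"
}

slot_from_location "U78AB.001.WZSGRE5-P1-C7-T1"   # prints P1-C7-T1
```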
5.2.5 Accessing the Integrated Virtualization Manager (IVM)
With VIOS installed, access the IVM web interface from a web browser:
1. Enter a URL containing the VIOS TCP/IP address configured above, for example (http://9.5.110...).
2. Log in using the user ID “padmin” and the password configured while installing VIOS.
5.3 Creating Virtual Servers
After installing the Virtual I/O Server (VIOS) and accessing the Integrated Virtualization Manager (IVM) through its web
interface, configure virtual servers for running Linux and the OSIS applications.
The following sections cover additional general VIOS configurations and how to configure a virtual server.
5.3.1 Preparing VIOS for Virtual Servers
The following sections cover several VIOS configurations to consider before creating virtual servers.
5.3.1.1 Optional: Updating the VIOS Name
To change the system name:
1. Click View/Modify System Properties
2. Click on the General tab
3. After System name, type the new name
4. Click OK to save the change
5.3.1.2 Mirroring VIOS Storage
Mirroring the hard disk containing VIOS provides redundancy in case of disk failure or maintenance. VIOS is installed on
hdisk0 by default, which is included in the rootvg storage pool. To mirror the VIOS hdisk:
1. Add a second disk to the same storage pool containing the disk with VIOS:
a. Click on View/Modify Virtual Storage
b. Click on the Physical Volumes tab
c. Add a check mark in the box next to the second disk
(For example, to add hdisk1 as a second disk, click the box next to hdisk1.)
d. From the More Tasks drop down list, select the Add to storage pool task
e. For the Storage pool drop down, select the rootvg storage pool
f. Click OK
(The second hdisk will be added to the rootvg storage pool.)
2. Enable the second disk as a mirror of the first:
a. Open a terminal window:
i. Click on View/Modify Partitions
ii. Check the box for the VIOS partition
iii. From the More Tasks drop down list, select Open terminal window
NOTE: If the terminal window does not open or closes immediately (the browser applet does not work), then
open a Secure Shell (SSH) terminal session to the VIOS, using the IP address for VIOS configured earlier. (See
References for SSH tools.)
b. From the terminal window, enter the following command to mirror the VIOS disk:
mirrorios -f -defer hdisk1
c. If the “-f” option is not used, enter y when prompted to continue
NOTE: If the VIOS does not automatically restart, review the power settings as seen in 5.2.2 step 7 (normal boot mode) and 5.2.3 (boot to OS).
NOTE: The option “-defer” instructs the command not to reboot the VIOS. VIOS 1.5 or later does not require a
reboot after the mirrorios command, so you should use the -defer option for these levels.
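To confirm the mirror afterwards, the following VIOS CLI sketch can be used (run from the padmin restricted shell; exact output columns vary by VIOS level):

```shell
# Each logical volume in rootvg should now show two physical partition
# copies (PPs roughly double the LPs), confirming the mirror.
lsvg -lv rootvg
```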
5.3.1.3 Optional: Updating VIOS Properties
To update the VIOS partition properties (name, memory, processor, and network) using IVM:
1. Click on View/Modify Partitions in the left navigation area
2. Click on the VIOS partition's name link to display its properties
3. Update the VIOS partition's name (which by default is based on the serial
number of the system):
a. Click on the General tab
b. For Partition name, enter a new name
4. Update Assigned memory:
a. Click on the Memory tab
b. Enter the desired Assigned memory in the field for Assigned memory
(VIOS 2.2 uses about 4 GB of memory when running IVM)
c. Increase or decrease Minimum memory and Maximum memory to accommodate. Assigned memory must be greater than Minimum memory and less than Maximum memory.
d. Leave the default values for Processing Units and Virtual Processors,
but monitor utilization.
5. To provide additional security, update the Ethernet ports used by VIOS to use a dedicated port instead of a port shared
with other virtual servers:
a. Click on the Ethernet tab.
b. Check the first two ports and uncheck the last two ports. The last two ports are configured later as shared ports for use by the virtual servers running Linux.
6. Click OK when done to update the VIOS properties
5.3.1.4 Creating a Storage Pool for Virtual Servers
Virtual servers running Linux should not use the same system physical disk drive used by VIOS. Therefore, virtual servers
running Linux require a separate storage pool containing their disks.
Create a new storage pool with the remaining internal disks:
1. Click on View/Modify Virtual Storage
2. Click on the Storage Pools tab
3. Click on Create Storage Pool
4. Enter a name (1 to 15 characters, no spaces)
5. Leave Logical volume based for the type
6. Select the option to have this storage pool used as the default
7. Select the remaining hard disks
8. Click OK to create the storage pool with the selected disks
5.3.1.5 Creating a Virtual Ethernet Bridge for Virtual Servers
There are two ways to give network connectivity to virtual servers:
1. Dedicated Networking: Native adapter ownership, which consists of assigning an entire adapter to a virtual server.
2. Shared Networking: Virtual Ethernet bridging using a Shared Ethernet Adapter
For this paper, we have more virtual servers than physical adapters, so a Shared Ethernet Adapter (SEA) needs to be configured. Create a SEA using an Ethernet adapter managed by VIOS as follows:
1. Decide upon a virtual networking layout
a. A virtual network bridge will be associated with a particular virtual network by VLAN ID.
b. Virtual network adapters need to have the same VLAN ID in order to communicate with each other.
c. The Integrated Virtualization Manager defines 4 virtual Ethernet adapters in the VIOS partition by default, with
VLAN IDs of 1, 2, 3, and 4 respectively.
d. Pick an available VLAN ID, or create a new virtual adapter
on the VIOS with a new VLAN ID.
2. Define the Virtual Ethernet Bridge:
a. Click on View/Modify Virtual Ethernet in the left navigation
area
b. Click on the Virtual Ethernet Bridge tab
c. For Virtual Ethernet ID 1, select the physical adapter from the list to use as the backing device. You can determine which interface is which by looking at the location codes provided in the list.
d. Note that selecting two different physical adapters results in two SEAs being created, not one with two ports.
e. Click on Apply
3. Verify that the SEA was created:
a. Click on View/Modify TCP/IP Settings in the left navigation area
b. Click on the 'Interfaces' tab
c. Look for a new enx interface with the description:
Shared Ethernet Adapter - Standard Ethernet Network Interface
4. Optionally, you can verify that the SEA was created on the VIOS command line:
a. Use PuTTY to connect to VIOS
b. Enter lsmap -all -net to view the virtual network mappings
The example on the right shows ent2 as an SEA using the virtual adapter ent0 and the physical location code.
NOTE: An interface that is in use by the VIOS cannot be designated as the backing device in a Shared Ethernet
Adapter. We originally configured an interface for use by the VIOS. To convert this to an SEA, the configured
interface must first be unconfigured using the serial terminal and the VIOS command, 'rmtcpip -interface enx', for interface
enx. You can then proceed to create the SEA as above using that interface as the backing device, followed by a reconfiguration of the networking on the VIOS (see section 5.2.4) using the SEA adapter interface.
5.3.1.6 Creating a Virtual Optical Library for Linux Installation Media
A virtual optical library allows ISO files to be stored on the server for later use by the virtual servers. For example, the ISO
files for the Linux distribution and the IBM Installation Toolkit can be stored in the library and used at any time for the installation and configuration of Linux.
Create a virtual media library:
1. Click on View/Modify Virtual Storage in the left navigation area
2. Click on the Optical/Tape tab
3. Click on Create Library
a. For Storage pool name, select the second storage pool created earlier, not the rootvg storage pool, which is recommended for VIOS use only
b. For Media library size, 8 GB is sufficient for both the Linux
distribution and the IBM Installation Toolkit. If necessary,
later extend the library to a larger size.
4. Add the ISO files:
a. Click Add Media
NOTE: When installing Linux using the IBM Installation Toolkit, add the
Linux ISO file to the library first, and then add the IBM Installation Toolkit
ISO file. This avoids a problem later where the Installation Toolkit cannot find
the Linux media during the install.
b. To add media using ISO files transferred using a web browser:
i. Select Upload media
ii. For Media type, select Read only
iii. For Optical media file to upload, click Browse and select the ISO
file from the workstation directory
iv. Click OK to copy the file into the library (this may take some time)
NOTE: Because of a 2 GB limitation on the media file size for media uploaded using a web browser, use
one of the other options listed below for a Linux distribution ISO file that is larger than 2 GB.
c. To add media using FTP:
i. Sign into VIOS using padmin
ii. Use FTP to transfer the ISO images to a local VIOS directory
iii. To copy the ISO file into the optical library, either:
i. From the VIOS command line:
mkvopt -name <media name> -file <source filename> -ro
or,
ii. From the IVM Virtual Optical Media panel:
1. Select Add existing file
2. For Media type, select Read only
3. Specify the name of the ISO file, for example: /home/padmin/RHEL6.0.iso
4. Click OK
NOTE: The name of the ISO must be all alphanumeric and not longer than 30 characters. Rename
the ISO file accordingly.
d. To add media using the system’s DVD drive
i. Select Import from physical optical device
ii. For Media type, select Read only
iii. For Media name, enter cd0
iv. Select the media device from the table shown
v. Click OK (this may take some time)
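The ISO naming restriction in the FTP note above can be checked before running mkvopt. The function below is a hypothetical helper (not part of VIOS): it enforces the 30-character limit and, as an assumption based on the guide's own RHEL6.0.iso example, allows dots in addition to alphanumeric characters.

```shell
# Return success only if the ISO file name satisfies the stated limits.
valid_iso_name() {
    name=$(basename "$1")
    # Not longer than 30 characters.
    [ ${#name} -le 30 ] || return 1
    # Alphanumeric plus '.' only (dots assumed allowed per the guide's example).
    case $name in
        *[!A-Za-z0-9.]*) return 1 ;;
    esac
    return 0
}

valid_iso_name /home/padmin/RHEL6.0.iso && echo "name is acceptable"
```

If the check fails, rename the file before adding it to the library, as the note instructs.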
5.3.2 Creating Virtual Disks for Virtual Server Storage
Virtual Servers require a virtual disk. For OSIS there are several considerations:
1. Virtual disks associated with internal storage
2. Virtual disks associated with external storage
3. Use of Linux software RAID support (redundancy and performance)
The following sections discuss configuration of virtual disks for internal storage, including Linux software RAID. External
storage is dependent on the type of external storage being used and is not covered in this paper.
5.3.2.1 Creating Virtual Disks for use with Linux Software RAID
When using Linux software RAID support for a virtual server, create multiple virtual disks from different physical disks.
Creating multiple virtual disks on the same physical disk causes performance degradation in a RAID configuration. Linux
software RAID support involves the following operational steps:
Operation | Comment | User Interface
Creating multiple virtual disks | Each virtual disk should be created from a different physical disk for performance and redundancy. Virtualization Managers (2) do not support this. | VIOS CLI (1)
Assigning the virtual disks to a virtual server | Details are provided in section 5.3 Creating Virtual Servers on page 15 | Virtualization Managers (2), VIOS CLI (1)
Creating Linux Software RAID device | Details are provided in section 5.3.6 Configuring a Linux Software RAID Device on page 32 | Linux CLI (3)
Table notes:
1. VIOS CLI is the Virtual I/O Server command line interface accessible from a remote terminal such as PuTTY.
2. Virtualization Managers include
a. Integrated Virtualization Manager (IVM)
b. Hardware Management Console (HMC)
c. Systems Director Management Console (SDMC). For information on SDMC, see IBM Systems Director Management
Console: Introduction and Overview.
3. Linux CLI is a Linux command line interface accessible from a remote terminal such as PuTTY.
5.3.2.2 Creating Virtual Disks using the VIOS Command Line
Since the available virtualization managers do not support specifying physical disks when creating a virtual disk, we can use the VIOS
command line to create virtual disks when preparing for a software RAID configuration. Use the following VIOS command to create a
virtual disk on a specific hard disk:
mklv -lv <logical volume / virtual disk name> <volume group / Storage Pool name> <size in GB>G <hdisk to use>
Example: mklv -lv mail2.2 Clients 30G hdisk2
o mail2.2 is the virtual disk name
o Clients is the existing Storage Pool to use
o 30G is the size of the virtual disk (G is used instead of GB)
o hdisk2 is the existing physical disk drive to create the virtual disk (file) on
Continue creating as many virtual disks in the RAID as needed, limited by the number of physical disks available.
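The per-disk mklv pattern above can be scripted so each RAID member lands on a different hdisk. This is a sketch: gen_mklv is a helper name invented here, and it only prints the commands (the virtual disk prefix, pool, and size mirror the example above).

```shell
# Print one mklv command per physical disk, numbering the virtual disks.
gen_mklv() {
    prefix=$1 pool=$2 size=$3
    shift 3
    i=1
    for disk in "$@"; do
        echo "mklv -lv ${prefix}.${i} ${pool} ${size} ${disk}"
        i=$((i + 1))
    done
}

gen_mklv mail2 Clients 30G hdisk2 hdisk3 hdisk4
```

Pasting the printed commands into the VIOS restricted shell then creates one virtual disk per physical disk, which is the layout the software RAID section requires.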
5.3.2.3 Creating Virtual Disks Using IVM
Virtual disks can be created during the creation of a virtual server or independently.
To use IVM to create a virtual disk independent of the virtual server:
1. Click on View/Modify Virtual Storage in the left navigation area
2. Click on the Virtual Disks tab
3. Click on Create Virtual Disk
4. Update the Create Virtual Disks page as follows:
a. For Virtual disk name, consider a name that associates this disk with the corresponding virtual server
b. For Storage pool name, use the storage pool that was created for the
Linux virtual servers, not the rootvg storage pool used by VIOS
c. For Virtual disk size, at least 5 GB is required to install Linux, but a
larger size is likely required for application data requirements.
d. For Assigned partition, select a virtual server from the list or None
when the assignment is to be performed later
e. Click OK to create the virtual disk
5. Additional virtual disks can be created now or later
5.3.3 Creating Virtual Servers
Virtual servers are created and managed by a virtualization manager (IVM, HMC). This section covers an example of creating a virtual server for use with Linux and OSIS.
Since a typical OSIS implementation would have multiple virtual servers for the various workloads of web, mail, file, print
and networking, it is initially helpful to collect information into a reference table. Below is an example used while developing
the OSIS solution guides.
Name | Description | Partition ID | Memory | Processors | Networking (physical and virtual) | Storage | Virtual Optical Media
Mail | Runs the primary mail server | (default) | 2 to 3 GB | 1 to 4 pCPU, 1 to 4 vCPU | Shared Ethernet | 3 Virtual Disks (MailDisk1-3) | Linux ISO, Toolkit ISO
MailWL | Runs mail workload generation utilities | (default) | 1 GB | 0.5 pCPU, 1 vCPU | Dedicated Ethernet | 1 Virtual Disk (MailWLDisk) | Linux ISO, Toolkit ISO
Table 2: Virtual Server Reference Table
The following is an example of creating a virtual server using IVM:
1. To launch the wizard for creating a partition for the virtual server
a. Click on View/Modify Partition in the left navigation area
b. Click on Create Partition
2. On the Create Partition: Name panel
a. For Partition ID, optionally change the ID (not very significant
or used very often)
b. Partition Name is the name of the virtual server and should be
unique and meaningful
c. For Environment, select AIX or Linux from the drop down list
d. Click on Next to continue with the wizard
3. On the Create Partition: Memory panel
a. For Assigned Memory, specify at least 2 GB initially; this value can be reconfigured later
b. Click on Next to continue with the wizard
4. On the Create Partition: Processors panel
a. For Assigned processors, select at least 1 from the list to indicate
the number of virtual processors to be used.
b. For Processing Mode, select Shared to allow dynamic sharing of
processors. Shared vs. Dedicated mode cannot be dynamically
changed; however, a virtual server's processor sharing priority can be
configured later, including no sharing, which is effectively the same
as Dedicated.
c. Click on Next to continue with the wizard
NOTE: These values can be reconfigured later.
5. On the Create Partition: Ethernet panel
a. For Ethernet networking there are several possible configurations depending on whether Ethernet ports
are shared between the virtual servers. Below are two example configurations used for the OSIS Solution Guides
b. For shared networking managed by VIOS, use the Shared Ethernet
Adapter (SEA) as follows (see example to right)
i. For Virtual Ethernet Adapters, select the SEA adapter shown
in the drop down list
c. For dedicated networking, assign an available Ethernet Adapter
natively to the partition (example not shown). This is done after the
partition has been created by going to the View/Modify Physical
Adapters window and changing the partition assignment of the desired adapter.
i. For Virtual Ethernet Adapters, select None for all of the
adapters
d. Click on Next to continue with the wizard
6. On the Create Partition: Storage Type panel
a. For Storage Type, select either Create virtual disk or Assign existing virtual disks and physical volumes (if previously created).
NOTE: Creating a virtual disk now is similar to the process covered in
5.3.2.3 Creating Virtual Disks Using IVM on page 20.
NOTE: Using a physical volume dedicates a disk for use by only
one virtual server, limiting storage options for other virtual servers.
NOTE: Virtual servers can have more than one virtual disk assigned, for example when planning for a Linux software RAID configuration.
b. Click on Next to continue with the wizard
7. On the Create Partition: Optical/Tape panel
a. Click on Modify to show the contents of the media library
b. Select the IBM Installation Toolkit media from the list
c. Click on OK to save this device definition
d. Repeat the steps above to create a 2nd device for the Linux distribution media
e. Click on Next to continue with the wizard
8. Click on Next to view a summary
9. To make changes, click links in the left navigation area
10. Click on Finish to create the partition definition for the virtual server
5.3.4 Preparing a Virtual Server for Activation
After creating a partition for a virtual server using the IVM Partition Creation wizard, consider the following changes to the
configuration before installing Linux:
1. Click View/Modify Partitions in the left navigation area
2. Click the name of the virtual server's partition in the list to modify it
3. On the General tab:
a. For Boot mode, consider setting to Systems Management Services (SMS) to stop the initial boot at the SMS menu,
which is necessary (later) to change the boot device to a virtual optical device associated with the Linux media.
4. On the Memory tab:
a. For Minimum memory, set the minimal amount of memory required to start the virtual server. Determining this
value is dependent on the Linux applications used. Recommendations:
i. Initially set this to 2 GB while installing Linux and the applications.
ii. After configuring the virtual server, use the nmon utility to monitor for required memory and reset this value
accordingly.
NOTE: changing this value will require a restart of the virtual server
b. For Assigned memory, set to the amount of memory the virtual server should run with.
c. For Maximum memory, set the value to match the upper memory range for the virtual server. An administrator
can then later change the virtual server’s Assigned memory up to this Maximum value.
NOTE: changing this value will require a restart of the virtual server
d. The example to the right provides a virtual server configuration with
(pending) changes for:
i. At least 0.5 GB of memory during startup
ii. Up to 1 GB of memory dynamically while running if needed
and available
iii. Up to 2 GB of memory that an administrator can later assign to
the virtual server. The assigned value can be set to a value
from 0.5 GB to 2 GB. The virtual server memory will then grow to this new value if memory is needed and
available (from the virtual server shared memory pool).
iv. Note that pending changes will occur when the virtual server's partition is started, or dynamically if the virtual server is running with a Linux configured for dynamic LPAR operations (DLPAR), which can be determined using the Retrieve Capabilities button on the partition properties General tab
5. On the Processor tab:
a. Processing Units relates to the number of active physical processors in
the PowerLinux system to be assigned to the virtual server, specified in
units of 1/10. Examples are 1.0 and 1.1.
i. For Minimum, set the minimal amount of physical CPU (pCPU)
required to start the virtual server. Determining this value is dependent on the Linux applications used. Recommendations:
1. Initially set this to 1 while installing Linux and the
applications.
2. After the virtual server is configured, use the
nmon or mpstat utilities to monitor for required
pCPU to start the virtual server and reset this value accordingly.
NOTE: changing this value will require a restart of the virtual server
ii. For Assigned, set to the amount of pCPU the virtual server should run with. This value can be changed dynamically while the virtual server is running.
iii. For Maximum, set the value to match the upper pCPU range for the virtual server. Later, an administrator
can change the virtual server’s Assigned pCPU up to this Maximum value.
NOTE: changing this value will require a restart of the virtual server
b. Virtual Processors relates to the processors as seen by Linux, assigned in whole units.
NOTE: By default on POWER7, Simultaneous Multithreading (SMT) with 4 threads causes Linux to see 4x the number of CPUs
specified by the vCPU value. For example, running a virtual server with 1 vCPU causes Linux to perceive there
are really 4 CPUs.
NOTE: Increasing the number of vCPUs (and thus threads) may in some cases help, depending on the number of
applications and their ability to utilize multiple CPUs effectively.
NOTE: When generating workloads for performance testing, as the number of vCPUs increases, it becomes more
difficult to drive workloads across all vCPUs.
NOTE: Creating a high number of vCPUs with a low number of pCPUs can create unnecessary overhead within
PowerVM.
i. For Minimum, start with a value of 1 for the initial Linux install and configuration of the workload. Later,
this value can be changed to align with changes to the Minimum pCPU.
ii. For Assigned, specify a value of at least one. Refer to the note about not setting this value too high. This
value can be changed while the virtual server is running, up to the value specified for Maximum. Consider
changing this value if the Assigned pCPU value changes.
iii. For Maximum, specify a value for the highest number of vCPUs expected.
iv. The example to the right provides a virtual server configuration with (pending) changes for:
1. At least 1 pCPU and 1 vCPU during startup
2. Up to 2 pCPUs and 2 vCPUs dynamically
allocated while running if needed and available
3. Up to 4 pCPUs and 4 vCPUs that an administrator can later assign to the virtual server. The
assigned vCPU and pCPU values can each be set to a value from 1 to 4. The CPU values
grow to this new value if CPU resources are needed and available (from the virtual server
shared processor pool).
v. Note that pending changes will occur when the virtual server's partition is started, or dynamically if the virtual server is running with a Linux configured for dynamic LPAR operations (DLPAR), which can be determined using the Retrieve Capabilities button on the partition properties General tab.
vi. When sharing CPU resources between virtual servers (that is,
running uncapped), the pCPUs assigned to a virtual server
can dynamically increase to the value specified for the Assigned vCPU. Generally this is desired; however, to prevent
dynamic changes of the pCPU, an administrator can change
the priority to None - Uncapped while the server is running.
This was necessary during OSIS Solution Guide testing to
measure running a virtual server with 0.1 pCPU (by default, the pCPU would dynamically change to match
the vCPU of 1.0).
NOTE: When modifying both CPU and memory settings of a virtual server that is running, perform this as
two separate operations instead of one.
NOTE: When modifying memory settings of a virtual server that is running, consider changes of 4 GB or less
to allow for dynamic memory adjustments with reduced paging by Linux.
NOTE: When monitoring CPU, utilities such as vmstat will report CPU cycles shared with other virtual
servers as “stolen”. Refer to Measuring stolen CPU cycles for more details. Run “mpstat -P ALL” to show
CPU utilization for all logical CPUs.
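The SMT note above can be verified from inside the virtual server. This sketch only reads what Linux reports; with SMT4 on POWER7, expect 4x the Assigned vCPU count:

```shell
# Count the logical CPUs Linux actually sees in this virtual server.
logical=$(getconf _NPROCESSORS_ONLN)
echo "Linux sees ${logical} logical CPUs"
```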
5.3.5 Installing Linux on a Virtual Server’s Partition
Once a virtual server's partition is defined with the system resources to be used, it is ready for activation and installation of
Linux. The following scenario covers an example of starting the partition and installing Linux using IBM Installation Toolkit v5.1 to install Linux from Linux distribution media.
5.3.5.1 Activating the Partition
Since Linux is being installed, the partition requires booting using the IBM Installation Toolkit media that was configured in
the Virtual Optical Library and assigned to the partition in section 5.3.1.6. Temporarily change the partition’s boot process
to boot from the corresponding virtual optical device as follows:
1. To access SMS, establish a terminal window before the partition is activated. The following are two methods for creating a terminal session:
a. IVM supports opening a terminal window for a partition:
i. Click on View/Modify Partitions in the left navigation area
ii. Select the check box for the partition to activate
iii. From the More Tasks drop down list select Open terminal
window
NOTE: There is a known problem with some browser configurations causing a terminal window to open and immediately close. If this occurs, use the second method below for
establishing a terminal window.
b. A terminal window to a partition can also be opened from the VIOS command line
as follows:
i. Using PuTTY, open a terminal window to the VIOS command line (covered
in previous sections of this paper)
ii. Enter the following VIOS command:
mkvt -id <partition ID>
NOTE: <partition ID> is found in the second column of the IVM list in the View/Modify Partitions panel.
NOTE: If an error indicates a terminal session is already open, enter the following VIOS command to remove the existing session (and try again): rmvt -id <partition ID>.
iii. Authenticate with the VIOS ID / password (padmin / padmin by default)
2. With a terminal window open to the partition, activate the partition as follows:
a. From the IVM View/Modify Partitions panel, select the partition
b. Click on Activate
c. Click on OK to confirm the activation
3. Monitor the partition's boot process from the terminal window and watch for the SMS menu to display.
4. Once the SMS menu displays, press 1 to start using SMS.
a. Note that if the partition was previously defined to stop at the SMS menu, this is not necessary
5. From the SMS menu, change the boot device to the virtual optical device associated with the IBM Installation Toolkit by following these steps (similar to the process used earlier when installing VIOS):
a. From Main Menu, press 5 for Select Boot Options
b. From Multiboot, press 1 for Select Install/Boot Device
c. From Select Device Type menu, press 7 for List All Devices
d. From the List All Devices menu, select the SCSI CD-ROM device associated with the virtual device for the IBM Installation Toolkit. The virtual device’s location ID corresponds to the virtual media assignment order for the partition. For example, the ID containing L82 is the first media assigned and L83 is the second media assigned.
NOTE: Based on the assignment of the IBM Installation Toolkit to the second optical device earlier in IVM, boot from device 2 associated with LUN 83.
e. Select 2 for Normal Boot Mode
f. Select 1 to exit SMS
NOTE: At this point, the system will begin the installation process for the partition. Worth noting here is that IBM PowerLinux systems will not permit running AIX or IBM i. When you try to use VMControl to deploy an AIX or IBM i virtual appliance to an IBM PowerLinux server, the task completes with errors. The following error message appears in the deploy task job log:
DNZIMN882E The deploy task is not progressing and has timed out.
5.3.5.2 Installing Linux using IBM Installation Toolkit
As mentioned earlier, use of the IBM Installation Toolkit to install Linux is important since it configures IBM’s value-add software. This includes:
1. Performing dynamic resource operations on virtual servers, such as adding and removing both CPU and memory resources.
2. Installation and configuration of web (LAMP), mail, print, file and networking services.
From a terminal window of a partition booting from the IBM Installation Toolkit v5.1, use the following steps to install
the toolkit and Linux.
1. When prompted, select the install language and keymap to use. English is the default and is used when no response is provided.
2. IBM Installation Toolkit v5.1 asks for a networking configuration up front. This is necessary to check for certain package updates.
3. Select 1 for Wizard mode to start the toolkit wizard for installing Linux. While in the toolkit wizard, use the tab key to navigate to menu options displayed in red.
NOTE: Some menu lines may not be displayed until you enlarge the terminal window and/or navigate to the non-displayed line. That is, to display some options, keep pressing the tab key.
4. Select Accept (at the last line of the menu) to accept the license. As mentioned above, use the tab key to navigate
to the menu options.
5. Select Install to start the install process
6. Select the correct version of Linux to be installed as follows:
a. Tab to Linux distribution
b. Press enter to display the selection window
c. Use up/down arrow keys to highlight the correct selection
d. Press Enter to select the value
7. Select the type of Linux install to perform. For OSIS, consider selecting Full since various open source packages are used.
a. Tab to Installation profile
b. Press Enter to display the field selection window
c. Press the down arrow to highlight your selection
d. Press Enter to select the value
8. Optionally, select an automatic or manual disk partitioning scheme
a. Tab to Installation profile
b. Press Enter to display the field selection window
c. Use the arrow keys to highlight your selection
d. Press Enter to select the value
9. With the Installation settings complete:
a. Tab to Next
b. Press Enter to proceed with the wizard
10. Select the open source workload(s) to configure during the installation
a. Tab to one or more workloads
b. Press Enter to select the workload
c. Select Next when selections are completed
11. Verify the installation sources.
a. Distro source and IBMIT source should show: [CD/DVD-ROM]
NOTE: Optionally, you can enter a valid URL to provide access to installation media
b. Select Next when selections are completed
12. To configure networking:
a. Tab to each of the fields and press Enter to change its value; at the bottom of the window, enter the new value
b. Select Configure to change the virtual server’s IP address information
c. Enter a DNS server to allow IBMIT to look for package updates
13. Update the networking information:
a. Tab to each field to be updated
b. At the bottom of the window, enter the new value and press Enter
c. When done, select Save to save the network IP information
14. Select Next to complete the network configuration and continue with the wizard.
15. Update the Linux settings for keyboard, mouse, password, etc.
16. Select additional IBM Repositories to enable for the Linux installation
The ibmpowerrepo repository contains IBM value-add productivity tools that enable dynamic partitioning, among other features. The repository allows the installation to easily keep these tools up to date.
17. Select packages to install
a. All base required packages are selected by default
b. Scroll down to select additional packages if required
c. Select Next to continue (you may need to scroll down to see this option)
NOTE: For RHEL 6, do not install the powerutils package, since the correct version is installed automatically from the RHEL 6 distribution media
18. Accept the license and click Next to continue
19. Review the summary
a. Tab to the menu options at the bottom of the display
b. Select Previous to make changes
c. Select Next to continue with the wizard
20. The IBM Installation Toolkit continues with the install of Linux
NOTE: If prompted for the Linux install media (a second virtual device with the Linux distribution could not be found), use IVM to change the partition's virtual media library to use the Linux .iso instead of the IBM Installation Toolkit, as described in section 5.3.1.6.
a. When ready, select Next in the terminal window to continue the IBM Installation Toolkit wizard
b. The IBM Installation Toolkit automatically reboots the partition
c. Press Enter when the boot prompt displays to continue the Linux install
d. Installation of the Linux packages then occurs
21. For Red Hat RHEL installations, a menu provides final configuration options. Consider:
a. In a testing environment with a private network, initially disabling the firewall until applications are installed and configured
b. Verifying the network IP and DNS settings
22. Consider using a PuTTY terminal window instead of the VIOS or IVM terminal windows for better functionality.
23. Finally, sign into the Linux console.
24. If you did not select the ibmpowerrepo to be installed, you can install the latest IBM Service and Productivity tools for Linux on Power manually. For example, to configure the repository and install the tools for a virtual server running RHEL 6:
wget ftp://public.dhe.ibm.com/software/server/POWER/Linux/yum/download/ibm-power-repo-1.1.6-5.ppc.rpm
rpm -ivh ibm-power-repo-1.1.6-5.ppc.rpm
yum install ibm-power-managed-rhel6
NOTE: Requires firewall access to public.dhe.ibm.com (9.17.248.112) and linuxpatch.ncsa.uiuc.edu (141.142.192.67)
NOTE: Requires the repo for the Linux distribution to be available for prerequisite software
NOTE: Optionally, install the Advance Toolchain compilers and optimized system libraries using:
yum install advance-toolchain-at5*
25. If the partition’s Boot mode was configured to stop at SMS, from the IVM GUI change the boot to NOT stop at SMS as follows:
a. Open the partition’s properties panel
b. On the General tab, change Boot mode to Normal
c. Click OK to save the changes
5.3.6 Configuring a Linux Software RAID Device
RAID configurations can help address VIOS workloads that are disk intensive. Hardware RAID is generally preferred over software RAID because it provides better performance, but hardware RAID requires additional planning, since the RAID configuration must occur before VIOS is installed.
To configure Linux software RAID using virtual disks, consider the following:
1. If not previously done, create additional virtual disks assigned to independent physical disks in the storage pool.
For details, refer to section 5.3.2.1 Creating Virtual Disks for use with Linux Software RAID on page 20.
2. Activate the virtual server’s partition with all of the virtual disks assigned to it.
3. If multiple virtual disks are used during the initial Linux installation:
a. The IBM Installation Toolkit will only allow an install on one disk (no software RAID support)
b. Linux automatically creates a RAID device, which can be displayed using the following Linux command:
i. mdadm --detail --scan
ii. The RAID device name should be: /dev/md/0_0
c. Additional virtual disks can be added to the RAID device using the following Linux command:
i. mdadm --add /dev/md/0_0 /dev/sdd
4. The installer does not create a RAID device when only one virtual disk is used. To create the RAID device for a non-LVM configuration, use the mdadm Linux command. For example:
a. mdadm --create -v /dev/md0 -l raid1 -n 3 /dev/sdb /dev/sdc /dev/sdd
where /dev/md0 is the RAID device name,
-l raid1 indicates a RAID-1 device, and
-n 3 indicates there are three devices to include in the RAID, which are then listed
NOTE: To create a RAID using Logical Volume Manager (LVM), refer to Basic Linux LVM Striping.
5. Create a file system on the RAID device, for example:
a. mkfs.ext3 /dev/md0
6. Create a RAID configuration file as follows:
a. mdadm --detail --scan > /etc/mdadm.conf
7. Create a directory for mounting the RAID device
a. mkdir <raid directory>
8. Configure a mount of the RAID device at boot time
a. Add the following line to /etc/fstab
i. /dev/md0 /<raid directory> ext3 defaults 1 2
9. Mount the RAID device now
a. mount -a
10. To configure Postfix to use the RAID-based file system for mail files:
a. Use a symbolic link to link the default mail directory to the directory on the RAID-based file system.
b. For example, to create the symbolic link and assign the mail group to the RAID directory:
i. ln -s /raid1/spool/mail /var/spool/mail
ii. chown :mail /raid1/spool/mail
5.4 Configuring OSIS Workloads
If web, mail, file, print, and networking applications were installed earlier using the IBM Installation Toolkit (covered in section 5.3.5.2 on page 28), a quick configuration of these workloads can be performed using the IBM Installation Toolkit Simplified Setup Tool as follows:
1. Use IVM or a terminal window to ensure that the virtual server associated with the installed workloads is activated and Linux is running
2. Open a browser to https://<virtual server IP address>:6060/ to visit the IBM Installation Toolkit Simplified Setup Tool web page
3. Log on using the Linux root ID and password (set earlier during Linux installation)
4. Click OK
NOTE: If the virtual server is configured with less than 16GB of memory, an Unsupported System warning may be displayed.
5. Click Yes to continue
6. Click Accept to accept the license
7. If the virtual server has Internet access and updates are available, install the updates
8. Click OK to continue
9. A list of workloads installed earlier by the IBM Installation Toolkit is displayed
NOTE: The IBM Installation Toolkit can also be used later to install workloads after installing Linux. Refer to the documentation available on the IBM Installation Toolkit website.
NOTE: At the time of this writing, using the IBM Installation Toolkit version 5.1 to perform a “full” install of RHEL 6.2 produces errors when configuring some workloads with the Simplified Setup Tool. A workaround is to remove the “augeas-libs” package:
rpm -e augeas-libs
10. After clicking on a workload to configure, update the configuration panels and complete the wizard. The IBM Installation Toolkit Simplified Setup Tool updates the required workload configuration files, and provides information regarding the changes.
5.4.1 Configuring webmin for Administration
webmin provides an easy-to-use GUI for administering Linux and controlling many services, including MySQL, web, and mail.
Download the webmin RPM from the webmin site downloads page and install it:
yum install webmin* --nogpgcheck --disablerepo=*
5.4.2 Configuration of Mail Application Server
This section covers the configuration changes used for setting up the Postfix and Dovecot mail applications.
Use the IBM Installation Toolkit Simplified Setup Tool for the initial configuration as follows:
1. For Postfix configuration (refer to the example on the right)
a. Use the recommended values shown in parentheses for most values
b. Use a fully qualified hostname to help with routing of mail
c. For My Origin, use the virtual server’s hostname for better identification of the sender
d. Use default values for directories to make Dovecot configuration easier. When using Linux software RAID, use a symbolic link to redirect mail to an alternative directory on the RAID device (covered in the RAID configuration section of this paper)
NOTE: The IBM Installation Toolkit Simplified Setup Tool updates /etc/postfix/main.cf
2. For Dovecot configuration (refer to the example on the right)
a. For Protocol, the configuration tool supports only imap.
b. The following Mail location value provides indexing in memory for better performance: mbox:~/mail:INBOX=/var/mail/%u:INDEX=MEMORY
NOTE: The Setup tool updates /etc/dovecot/dovecot.conf
3. For SUSE, the Cyrus Configuration changes are:
a. In /etc/imapd.conf, add the following:
configdirectory: /var/lib/imap
partition-default: /var/spool/imap
sievedir: /var/lib/sieve
admins: cyrus root
allowanonymouslogin: no
autocreatequota: 10000
reject8bit: no
quotawarn: 90
timeout: 30
poptimeout: 10
dracinterval: 0
drachost: localhost
sasl_pwcheck_method: saslauthd
lmtp_over_quota_perm_failure: no
lmtp_downcase_rcpt: yes
allowplaintext: yes
sasl_mech_list: PLAIN LOGIN
auth_mech: unix
servername: node1
b. In /etc/sysconfig/mail set:
SMTPD_LISTEN_REMOTE="yes"
c. In /etc/postfix/main.cf change mailbox_transport to cyrus to enable it:
mailbox_transport = cyrus
d. After changing the configuration files, restart the cyrus and saslauthd services:
service cyrus restart
service saslauthd restart
4. After the IBM Installation Toolkit Simplified Setup Tool updates the configuration files, the following changes were
also made:
a. For Postfix
i. In /etc/postfix/main.cf, uncomment the following lines:
header_checks = regexp:/etc/postfix/header_checks
default_privs = nobody
ii. Added these lines:
disable_dns_lookups = yes
smtp_host_lookup = native
smtp_bind_address = 0.0.0.0
NOTE: When using procmail for anti-spam filtering, add the following line:
mailbox_command = /usr/bin/procmail
iii. After changing the configuration files, restart Postfix:
service postfix restart
b. For Dovecot
i. In /etc/dovecot/conf.d/auth-deny.conf.ext, change yes to no in the following line:
deny = no
ii. In /etc/dovecot/conf.d/10-auth.conf, uncomment the following line
disable_plaintext_auth = no
iii. Dovecot configuration can be verified using the following:
dovecot -n
iv. After changing the configuration files, restart Dovecot as follows:
service dovecot restart
5.4.3 Configuring Postfix Anti-spam
SpamAssassin provides Postfix with anti-spam mail protection. Postfix passes emails to procmail, which invokes SpamAssassin.
Configuration of procmail is made easy using webmin as follows:
1. Visit webmin using this URL: https://<virtual server IP address>:10000
NOTE: If you cannot connect to webmin, follow these instructions: http://www.webmin.com/firewall.html
2. Log in with the Linux root ID and password
3. Click Servers
4. Click Procmail Mail Filter
5. Add an action to invoke SpamAssassin as follows:
a. Click Add action
b. For Delivery mode, select Feed to program
c. Set the program to /usr/bin/spamassassin
d. Check Wait for action program to finish, and check result
e. Check Action program is a filter
f. Click Save to save this action
6. Add an action to handle the result, for example to append the spam mail to the default mail file (refer to the screenshot below)
7. When done, two actions should appear in the list
5.4.4 Configure Sending Mail for Performance Testing
To enable testing of the throughput of the mail application server for the OSIS solution guides, use the open source utility smtp-source to send email to the server.
To simulate multiple mail clients sending emails:
smtp-source was run on a separate virtual server specifically to generate a mail workload of sending emails
smtp-source was run multiple times in the background to simulate multiple users sending mail at the same time. For example, the following command sends in the background 5000 5 KB emails over 100 sessions from user root to user mailtest1 on mail server PowerLinuxMail.example.com:25
o smtp-source -s 100 -l 5120 -m 5000 -c -f root@PowerLinuxMailW.example.com -t mailtest1@PowerLinuxMail.rchland.ibm.com PowerLinuxMail.example.com:25 &
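The background runs described above can be fanned out with a small loop. This is a sketch under stated assumptions: the recipients helper is hypothetical, and the host and user names are the example values used in this section.

```shell
# Sketch: one background smtp-source run per simulated sender.
# recipients() is a hypothetical helper that prints N example addresses.
recipients() {
  i=1
  while [ "$i" -le "$1" ]; do
    echo "mailtest$i@PowerLinuxMail.example.com"
    i=$((i + 1))
  done
}

# for r in $(recipients 5); do
#   smtp-source -s 100 -l 5120 -m 5000 -c \
#     -f root@PowerLinuxMail.example.com \
#     -t "$r" PowerLinuxMail.example.com:25 &
# done
# wait   # block until every background sender finishes
```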
5.4.5 Configure Retrieving Mail for Performance Testing
To enable testing of the throughput of the mail server for delivery of email, the open source utility fetchmail was used to retrieve email.
To simulate multiple mail clients receiving emails:
fetchmail was run on a separate virtual server specifically to generate a mail workload of receiving emails
Email was received from multiple user mailboxes to ensure a workload across all CPUs. For example, 16 mail users were used to receive emails when running with 4 CPUs, utilizing all 16 logical CPUs.
fetchmail was run as a daemon to receive emails in the background for each of these mail users
Configure fetchmail as follows:
1. fetchmail is provided with the RHEL distribution and is installed with the following Linux yum command:
yum install fetchmail
2. Create multiple Linux users
3. For each user, create the file ~/.fetchmailrc
4. Modify /home/mailtestx/.fetchmailrc to retrieve emails every 5 seconds using the IMAP protocol for the user mailtestx, sequentially from two different mail servers:
set daemon 5
defaults
protocol IMAP
authenticate password
no keep
fetchall
poll powerlinuxmail.rchland.ibm.com username "mailtestx" password "test"
poll powerlinuxmail2.rchland.ibm.com username "mailtestx" password "test"
5. Set the owner of the configuration file using the following command:
sudo -u mailtestx chown mailtestx:mailtestx /home/mailtestx/.fetchmailrc
6. Run fetchmail in the background as a daemon for user mailtestx:
sudo -u mailtestx fetchmail
NOTE: On RHEL 5.7 this command fails with the following error:
fetchmail: couldn't time-check the run-control file
fetchmail: lstat: /root/.fetchmailrc: Permission denied
Solution: log in as mailtestx
7. The following command terminates fetchmail when it is running in the background as a daemon:
sudo -u mailtestx fetchmail --quit
NOTE: fetchmail requires postfix to be running on the local client system to route mail.
NOTE: On SLES 11 SP1, if you get the following error:
miz11 postfix/pipe[2562]: B34206FCC1: to=<test123@miz11.austin.ibm.com>, relay=cyrus, delay=1.3, delays=0.42/0.03/0/0.88, dsn=5.6.0, status=bounced (data format error. Command output: test123: Mailbox does not exist )
Consider creating the mailbox for mailtestx as follows:
miz06:~ # passwd cyrus
Changing password for cyrus.
New Password:
Reenter New Password:
Password changed.
miz06:~ # cyradm --user cyrus localhost
Password:
localhost> cm user.mailtest1 #create a mailbox for mailtest1
localhost> lm #verify
user.mailtest1 (HasNoChildren)
localhost> exit
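Since steps 3–5 above repeat for each simulated user, the .fetchmailrc files can be generated rather than hand-edited. A minimal sketch, assuming the mailtestx naming convention and the example servers and password from step 4; fetchmailrc_for is a hypothetical helper:

```shell
# Sketch: emit the .fetchmailrc from step 4 for one simulated user.
# fetchmailrc_for is hypothetical; server names and the "test" password
# are the example values used in this section.
fetchmailrc_for() {
  cat <<EOF
set daemon 5
defaults
protocol IMAP
authenticate password
no keep
fetchall
poll powerlinuxmail.rchland.ibm.com username "$1" password "test"
poll powerlinuxmail2.rchland.ibm.com username "$1" password "test"
EOF
}

# fetchmailrc_for mailtest1 > /home/mailtest1/.fetchmailrc
# chown mailtest1:mailtest1 /home/mailtest1/.fetchmailrc
```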
5.5 Monitoring software
Several monitoring tools are available for Linux that run locally on the virtual server to monitor utilization.
mpstat provides CPU utilizations. For example, “mpstat 2 10” displays CPU metrics every 2 seconds for 10 iterations.
iostat provides disk utilizations. For example, “iostat -z -d -x 2 10” displays disk utilizations every 2 seconds for 10 iterations. Utilizations include average disk utilization (last column).
top provides memory utilization.
nmon provides disk and memory utilization. nmon can be installed using the IBM Installation Toolkit.
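The output of these tools is easy to post-process in scripts. As a hedged sketch, the function below turns mpstat's "all" summary line into an overall busy percentage; it assumes sysstat's default column layout, where %idle is the last field.

```shell
# Sketch: compute overall CPU busy % from mpstat output. Assumes sysstat's
# default layout, where %idle is the last field of the "all" summary line.
cpu_busy() {
  awk '/ all / {printf "%.1f\n", 100 - $NF; exit}'
}

# Example use: mpstat 2 1 | cpu_busy
```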
Consider using Ganglia to monitor several virtual servers over the network. Ganglia is an open source monitoring application that is very useful for monitoring resource utilization of both individual and multiple virtual servers.
For the development of the OSIS solution guides, the Ganglia configuration collected performance metrics from each of the virtual application servers. At a centralized collection point that received the data, a Ganglia web application displayed the results.
By utilizing Ganglia’s gmetric utility, one can customize Ganglia by adding additional displayable metrics. For OSIS-related metric extensions, refer to the Ganglia Gmetric repository and Ganglia Gmetric Script Repository for open source metrics for mail, web, file sharing, MySQL, and networking. For example, a Linux script is available for obtaining the Postfix mail queue size and passing the information to gmetric to add to the database of displayable metrics.
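As a sketch of how such a script might look (hedged: the mailq summary-line format and the gmetric flags shown are typical, but verify them against your Postfix and Ganglia versions), the queue depth can be scraped from mailq and handed to gmetric:

```shell
# Sketch: report the Postfix mail queue depth to Ganglia via gmetric.
# mailq typically ends with a summary like "-- 12 Kbytes in 4 Requests."
# or the line "Mail queue is empty"; both formats are assumptions.
queue_depth() {
  tail -n 1 | awk '/Requests/ {print $(NF-1)} /empty/ {print 0}'
}

# gmetric --name postfix_queue --type uint32 --value "$(mailq | queue_depth)"
```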
IBM Power customers in highly virtualized environments have used Ganglia to ensure efficient system utilization without reaching limits. The University of Pittsburgh Medical Center is an example of using Ganglia with customization to ensure that 60 virtual servers utilize system resources appropriately.
The following sections cover the installation and configuration of the Ganglia software.
5.5.1 Configuring Ganglia on Monitored Virtual Servers
When monitoring virtual servers, install and configure the Ganglia monitoring software the same way on each virtual server as follows:
1. Install the Ganglia software updated for PowerLinux (download from the Ganglia for IBM Power website). For example, on RHEL 6:
yum install libconfuse* --nogpgcheck
yum install ganglia-lib* --nogpgcheck
yum install ganglia-gmond* --nogpgcheck
yum install ganglia-mod_ibmpower* --nogpgcheck
yum install ganglia-mod_ibmrperf* --nogpgcheck
yum install ganglia-mod_netif* --nogpgcheck
2. Update the Ganglia configuration file /etc/ganglia/gmond.conf:
In the cluster section, set the cluster name value, for example "PowerLinux".
NOTE: A cluster is a grouping of systems to be monitored. For example, a cluster could be all virtual servers within an OSIS configuration.
The following may need to be updated (if not already):
udp_send_channel {
mcast_join = 239.2.11.71
port = 8649
ttl = 1
}
udp_recv_channel {
mcast_join = 239.2.11.71
port = 8649
bind = 239.2.11.71
}
NOTE: The frequency of collecting Collection Group metrics can be modified by changing collect_every = <frequency in seconds>
3. Start the Ganglia service:
service gmond start
4. To ensure Ganglia starts with each virtual server restart, use this command:
chkconfig gmond on