Quick-and-Easy Deployment 
of a Ceph Storage Cluster 
with SLES 
With a look at SUSE Studio, Manager and Build Service 
Jan Kalcic 
Sales Engineer 
jkalcic@suse.com 
Flavio Castelli 
Senior Software Engineer 
fcastelli@suse.com
2 
Agenda 
Ceph Introduction 
System Provisioning with SLES 
System Provisioning with SUMa
3 
Agenda 
Ceph Introduction 
System Provisioning with SLES 
System Provisioning with SUMa 
SUSE Studio 
SUSE Manager
Ceph Introduction
5 
What is Ceph 
• Open-source software-defined storage 
‒ It delivers object, block, and file storage in one unified system 
• It runs on commodity hardware 
‒ To provide an infinitely scalable Ceph Storage Cluster 
‒ Where nodes communicate with each other to replicate and 
redistribute data dynamically 
• It is based upon RADOS 
‒ Reliable, Autonomic, Distributed Object Store 
‒ Self-healing, self-managing, intelligent storage nodes
6 
Ceph Components 
Monitor 
Object Storage Device (OSD) 
Ceph Metadata Server (MDS) 
Ceph Storage Cluster 
Ceph Clients 
Ceph Block Device (RBD) 
Ceph Object Storage (RGW) 
Ceph Filesystem 
Custom implementation
7 
Ceph Storage Cluster 
• Ceph Monitor 
‒ It maintains a master copy of the cluster map (i.e. cluster 
members, state, changes, and overall health of the cluster) 
• Ceph Object Storage Device (OSD) 
‒ It interacts with a logical disk (e.g. LUN) to store data (i.e. 
handle the read/write operations on the storage disks). 
• Ceph Metadata Server (MDS) 
‒ It provides the Ceph Filesystem service. Its purpose is to store 
filesystem metadata (directories, file ownership, access 
modes, etc.) in high-availability Ceph Metadata Servers
8 
Architectural Overview 
Ceph Clients Ceph Monitors 
Ceph MDS Ceph OSDs
9 
Architectural Overview 
Ceph Clients Ceph Monitors 
Status Report, Auth 
Cluster Map, Auth 
Data I/O 
Ceph MDS Ceph OSDs
10 
Deployment Overview 
• All Ceph clusters require: 
‒ at least one monitor 
‒ at least as many OSDs as copies of an object stored on the 
cluster 
• Bootstrapping the initial monitor is the first step 
‒ This also sets important criteria for the cluster (e.g. number of 
replicas for pools, number of placement groups per OSD, 
heartbeat intervals, etc.) 
• Add further Monitors and OSDs to expand the cluster
11 
Monitor Bootstrapping 
• On mon node, create “/etc/ceph/ceph.conf” 
[global] 
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993 
mon initial members = node1 
mon host = 192.168.0.1 
public network = 192.168.0.0/24 
auth cluster required = cephx 
auth service required = cephx 
auth client required = cephx 
osd journal size = 1024 
filestore xattr use omap = true 
osd pool default size = 2 
osd pool default min size = 1 
osd pool default pg num = 333 
osd pool default pgp num = 333 
osd crush chooseleaf type = 1 
Add UUID: `uuidgen` 
Add mon 
Add IP 
... 
• Create a keyring for your cluster and generate a 
monitor secret key. 
ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap 
mon 'allow *'
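A minimal sketch, assuming the file above is already in place at /etc/ceph/ceph.conf, of generating the fsid and filling it in: 
fsid=$(uuidgen) 
sed -i "s/^fsid = .*/fsid = ${fsid}/" /etc/ceph/ceph.conf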
12 
Monitor Bootstrapping (cont) 
• Generate an administrator keyring, generate a 
client.admin user and add the user to the keyring 
ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n 
client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow' 
• Add the client.admin key to the ceph.mon.keyring. 
ceph-authtool /tmp/ceph.mon.keyring --import-keyring 
/etc/ceph/ceph.client.admin.keyring 
• Generate a monitor map using the hostname(s), host 
IP address(es) and the FSID. Save it as 
/tmp/monmap: 
‒ monmaptool --create --add node1 192.168.0.1 --fsid a7f64266-0894-4f1e-a635- 
d0aeaca0e993 /tmp/monmap
13 
Monitor Bootstrapping (cont) 
• Create a default data directory (or directories) on the 
monitor host(s). 
sudo mkdir /var/lib/ceph/mon/ceph-node1 
• Populate the monitor daemon(s) with the monitor map 
and keyring. 
ceph-mon --mkfs -i node1 --monmap /tmp/monmap --keyring 
/tmp/ceph.mon.keyring 
• Start the monitor(s). 
sudo /etc/init.d/ceph start mon.node1 
• Verify that the monitor is running: 
ceph -s
14 
Adding OSDs 
• Once you have your initial monitor(s) running, you 
should add OSDs 
‒ Your cluster cannot reach an active + clean state until you 
have enough OSDs to handle the number of copies of an 
object (e.g., osd pool default size = 2 requires at least two 
OSDs). 
• Ceph provides the ceph-disk utility, which can prepare 
a disk, partition or directory for use with Ceph 
‒ The ceph-disk utility creates the OSD ID by incrementing the 
index. 
‒ ceph-disk will add the new OSD to the CRUSH map under the 
host for you.
15 
Adding OSDs (cont) 
• Prepare the OSD. 
sudo ceph-disk prepare --cluster ceph --cluster-uuid a7f64266-0894-4f1e-a635- 
d0aeaca0e993 --fs-type ext4 /dev/hdd1 
• Activate the OSD: 
sudo ceph-disk activate /dev/hdd1 
• To watch the placement groups peer: 
ceph -w 
• To view the tree, execute the following: 
ceph osd tree
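If a node carries several data disks, the two commands above can simply be repeated per device. A short sketch, where the device names are assumptions and the cluster UUID is the one used above: 
for dev in /dev/sdb1 /dev/sdc1; do 
  sudo ceph-disk prepare --cluster ceph --cluster-uuid a7f64266-0894-4f1e-a635-d0aeaca0e993 --fs-type ext4 $dev 
  sudo ceph-disk activate $dev 
done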
Lab
SUSE Studio
18 
SUSE Studio 
• Web application 
• Makes it easy to create appliances 
• Targets different versions of openSUSE and SLE 
• Supports different output formats: 
‒ ISO, Preload ISO 
‒ USB disk/ hard disk image 
‒ Virtual machines (VMware, Xen, KVM, Hyper-V) 
‒ Cloud (AWS, OpenStack, SUSE Cloud, Azure)
19 
Add packages from OBS project
20 
Use LVM
21 
Add custom files to the appliance
22 
Build a preload ISO
23 
What is a “preload ISO” 
Bootable ISO which: 
1) Asks the user for permission to wipe the disk. 
2) Erases the contents of the disk. 
3) Partitions the disk (using LVM in our case). 
4) Dumps the contents of the appliance to the disk.
SLES as PXE Boot Server
25 
Introduction 
• A PXE boot server allows you to boot a system over 
the network instead of from a DVD. 
• It is a key requirement of system provisioning. 
• When used in conjunction with AutoYaST, you can 
have a fully automated installation of SUSE Linux 
Enterprise. 
• Components needed: 
‒ DHCP server 
‒ TFTP server 
‒ PXE boot server (syslinux) 
‒ Software repositories
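A sketch of pulling in these building blocks on SLES; the package names (plus apache2 to serve the repositories over HTTP) are assumptions based on standard SLES packaging: 
zypper in dhcp-server tftp syslinux apache2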
26 
DHCP Server 
• Install, configure and enable the DHCP Server as 
usual. 
• /etc/dhcpd.conf 
subnet 192.168.43.0 netmask 255.255.255.0 { 
range 192.168.43.1 192.168.43.254; 
filename "pxelinux.0"; 
next-server 192.168.43.100; 
} 
• Two parameters: 
‒ next-server: used to specify the host address of the server 
from which the initial boot file is to be loaded. 
‒ filename: used to specify the name of the initial boot file which 
is to be loaded by a client.
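A short sketch of the remaining service handling on SLES 11; the interface name is an assumption: 
echo 'DHCPD_INTERFACE="eth0"' >> /etc/sysconfig/dhcpd 
chkconfig dhcpd on 
rcdhcpd restart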
27 
TFTP Server 
• Install and enable the TFTP server. 
• /etc/xinetd.d/tftpd 
service tftp 
{ 
socket_type = dgram 
protocol = udp 
wait = yes 
flags = IPv6 IPv4 
user = root 
server = /usr/sbin/in.tftpd 
server_args = -v -s /srv/tftp 
disable = no 
}
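A sketch of enabling the service, assuming tftp runs under xinetd as configured above: 
mkdir -p /srv/tftp 
chkconfig xinetd on 
rcxinetd restart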
28 
PXE Boot Server 
• Install “syslinux” and configure the PXE boot server 
/srv/tftp/pxelinux.cfg/default 
/srv/tftp/f1 
/srv/tftp/initrd 
/srv/tftp/linux 
/srv/tftp/pxelinux.0 
• /srv/tftp/pxelinux.cfg/default 
# hard disk 
label harddisk 
kernel linux 
append SLX=0x202 
# ceph installation 
label cephosd 
kernel linux 
append initrd=initrd splash=silent showopts \
install=http://192.168.43.100/repo/JeOS-SLES11-SP3-x86_64/CD1 \
autoyast=http://192.168.43.100/autoyast/osd.xml 
implicit 0 
display f1 
prompt 1 
timeout 0
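A sketch of laying out the files listed above; the pxelinux.0 location under /usr/share/syslinux is an assumption based on the SLES syslinux package: 
mkdir -p /srv/tftp/pxelinux.cfg 
cp /usr/share/syslinux/pxelinux.0 /srv/tftp/ 
# initrd, linux and the f1 menu file are added next (see the following slide)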
29 
PXE Boot Server (cont) 
• /srv/tftp/f1 
harddisk - Boot from Harddisk 
cephosd - Add a new Ceph OSD to the existing Ceph Storage Cluster 
cephmon - TODO Add a new Ceph Mon to the existing Ceph Storage Cluster 
cephmds - TODO Add a new Ceph MDS to the existing Ceph Storage Cluster 
• Copy files “initrd” and “linux” from the SLES DVD 
# cp /media/cdrom/boot/x86_64/loader/initrd /srv/tftp/ 
# cp /media/cdrom/boot/x86_64/loader/linux /srv/tftp/ 
• man syslinux
30 
PXE Boot Server (cont) 
• Example for more complex scenarios 
/srv/tftp/pxelinux.cfg/default 
/srv/tftp/f1 
/srv/tftp/initrd-SLES11SP2 
/srv/tftp/linux-SLES11SP2 
/srv/tftp/initrd-SLES11SP3 
/srv/tftp/linux-SLES11SP3 
/srv/tftp/initrd-SLES12 
/srv/tftp/linux-SLES12 
/srv/tftp/pxelinux.0 
___________________ 
Copy “initrd” and “linux” from more distributions. Unique names are needed! 
# ceph on SLES11 SP2 
label cephosd-sles11sp2 
kernel linux 
append initrd=initrd splash=silent showopts \
install=http://192.168.43.100/repo/SLES11-SP2-x86_64/CD1 \
autoyast=http://192.168.43.100/autoyast/osd.xml 
# ceph on SLES11 SP3 
label cephosd-sles11sp3 
kernel linux 
append initrd=initrd splash=silent showopts \
install=http://192.168.43.100/repo/SLES11-SP3-x86_64/CD1 \
autoyast=http://192.168.43.100/autoyast/osd.xml 
# ceph on SLES12 
label cephosd-sles12 
kernel linux 
append initrd=initrd splash=silent showopts \
install=http://192.168.43.100/repo/SLES12-x86_64/CD1 \
autoyast=http://192.168.43.100/autoyast/osd.xml 
Create entries in 
“/srv/tftp/pxelinux.cfg/default” 
and then update file “f1” 
accordingly.
31 
Installation Server 
• It is used to create and manage the repositories hosted 
on the system and used by clients for network 
installation. 
• It is basically about: 
‒ Configuring initial server options (i.e. directory, protocol, alias) 
‒ Importing DVD content (or ISO) for each distribution 
• YaST is your friend 
‒ yast2 instserver
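Once the installation server is set up, a quick way to confirm a repository is reachable over HTTP; the alias and the content file at the media root are assumptions matching the examples in this deck: 
curl -I http://192.168.43.100/repo/SLES11-SP3-x86_64/CD1/content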
LAB
AutoYaST
34 
Introduction 
• AutoYaST is a system for installing one or more SUSE 
Linux systems automatically 
‒ without user intervention 
‒ even when customization is required 
• Installations are performed using an AutoYaST profile 
(XML) with installation and configuration data. 
• Use cases: 
‒ Massive deployment 
‒ Deployment on demand 
‒ Custom installation 
‒ Changing hardware or a wide variety of hardware
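Before publishing a profile, a simple well-formedness check is cheap. A sketch, assuming the profile lives under the web server document root used earlier (the path is an assumption): 
xmllint --noout /srv/www/htdocs/autoyast/osd.xml && echo "profile is well-formed XML"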
35 
Add-on and Partitions 
<add-on> 
<add_on_products config:type="list"> 
<listentry> 
<media_url><![CDATA[http://@MASTER_IP@/repo/SLES11-SP3-x86_64-CEPH]]></media_url> 
<product>SLES11-SP3-x86_64-CEPH</product> 
<product_dir>/</product_dir> 
<ask_on_error config:type="boolean">true</ask_on_error> 
</listentry> 
</add_on_products> 
</add-on> 
<partitioning config:type="list"> 
<drive> 
<initialize config:type="boolean">true</initialize> 
<partitions config:type="list"> 
<partition> 
<create config:type="boolean">true</create> 
<crypt_fs config:type="boolean">false</crypt_fs> 
<filesystem config:type="symbol">xfs</filesystem> 
<format config:type="boolean">true</format> 
<fstopt>defaults</fstopt> 
<loop_fs config:type="boolean">false</loop_fs> 
<lv_name>LVceph</lv_name> 
<mount>/ceph</mount> 
<mountby config:type="symbol">device</mountby> 
<partition_id config:type="integer">131</partition_id> 
<resize config:type="boolean">false</resize> 
<size>max</size> 
</partition>
36 
Networking 
<networking> 
<dhcp_options> 
<dhclient_client_id></dhclient_client_id> 
<dhclient_hostname_option>AUTO</dhclient_hostname_option> 
</dhcp_options> 
<dns> 
<dhcp_hostname config:type="boolean">true</dhcp_hostname> 
<hostname>@HOSTNAME@</hostname> 
<domain>@DOMAIN@</domain> 
<resolv_conf_policy></resolv_conf_policy> 
<write_hostname config:type="boolean">true</write_hostname> 
</dns> 
<interfaces config:type="list"> 
<interface> 
<bootproto>static</bootproto> 
<device>eth0</device> 
<ipaddr>@IPADDRESS@</ipaddr> 
<netmask>@NETMASK@</netmask> 
<startmode>auto</startmode> 
<usercontrol>no</usercontrol> 
</interface> 
</interfaces> 
<managed config:type="boolean">false</managed> 
</networking>
37 
Ask # 1 
<ask-list config:type="list"> 
<ask> 
<title>Basic Host Configuration</title> 
<dialog config:type="integer">0</dialog> 
<element config:type="integer">1</element> 
<pathlist config:type="list"> 
<path>networking,dns,hostname</path> 
</pathlist> 
<question>Full hostname (e.g. linux.site)</question> 
<stage>initial</stage> 
<script> 
<filename>hostname.sh</filename> 
<rerun_on_error config:type="boolean">false</rerun_on_error> 
<environment config:type="boolean">true</environment> 
<source><![CDATA[#!/bin/bash 
echo $VAL > /tmp/answer_hostname 
]]> 
</source> 
<debug config:type="boolean">false</debug> 
<feedback config:type="boolean">true</feedback> 
</script> 
</ask> 
[...]
38 
Ask # 2 
<ask> 
<title>Basic Host Configuration</title> 
<dialog config:type="integer">0</dialog> 
<element config:type="integer">2</element> 
<pathlist config:type="list"> 
<path>networking,interfaces,0,ipaddr</path> 
</pathlist> 
<question>IP Address and CIDR prefix (e.g. 192.168.1.1/24)</question> 
<stage>initial</stage> 
<script> 
<filename>ip.sh</filename> 
<rerun_on_error config:type="boolean">true</rerun_on_error> 
<environment config:type="boolean">true</environment> 
<source><![CDATA[#!/bin/bash 
function check_host() { 
local host=$1 
[ -z "$host" ] && echo "You must provide a valid hostname!" && exit 1 
[ -z "${host//[A-Za-z0-9]/}" ] && echo -e "Is this a valid full hostname? (e.g. linux.site)\nCould not find the domain name in \"${host}\"" && exit 1 
tmp="$(echo "$host" | awk -F "." '{print $2}')" 
[ -z "$tmp" ] && echo -e "Is this a valid full hostname? (e.g. linux.site)\nCould not find the domain name in \"${host}\"" && exit 1 
tmp="${host//[A-Za-z0-9]/}" 
[ "${#tmp}" -ne 1 ] && echo -e "Is this a valid full hostname? (e.g. linux.site)\nA full hostname can contain only one dot" && exit 1 
return 0 
} 
cidr2mask () 
{ 
# Number of args to shift, 255..255, first non-255 byte, zeroes 
set -- $(( 5 - ($1 / 8) )) 255 255 255 255 $(( (255 << (8 - ($1 % 8))) & 255 )) 0 0 0 
[ $1 -gt 1 ] && shift $1 || shift 
echo ${1-0}.${2-0}.${3-0}.${4-0} 
} [...]
39 
Ask # 2 (cont) 
function check_ip() { 
local ip=$1 tmp 
[ -z "$ip" ] && echo "You must provide a valid IP address plus CIDR prefix!" && exit 1 
[ "${#ip}" -lt 9 ] && echo -e "Is this a valid IP address plus CIDR prefix?\nYou entered only '${#ip}' chars." && exit 1 
[ -n "${ip//[0-9./]/}" ] && echo -e "Is this a valid IP address plus CIDR prefix?\nFound invalid character: '${ip//[0-9.]/}'." && exit 1 
tmp="$(echo "$ip" | awk -F "/" '{print $2}')" 
[ -z "$tmp" ] && echo -e "The CIDR prefix must be provided too!\n(e.g. 192.168.1.1/24)\n" && exit 1 
return 0 
} 
HOSTNAME="$(cat /tmp/answer_hostname)" 
check_host "$HOSTNAME" 
check_ip "$VAL" 
DOMAIN=$(echo "$HOSTNAME" | awk -F "." '{print $2}') 
HOSTNAME=$(echo "$HOSTNAME" | awk -F "." '{print $1}') 
IPADDRESS=$(echo "$VAL" | cut -d "/" -f1) 
CIDR=$(echo "$VAL" | awk -F "/" '{print $2}') 
NETMASK=$(cidr2mask "$CIDR") 
sed -e "s/@HOSTNAME@/$HOSTNAME/g"  
-e "s/@DOMAIN@/$DOMAIN/g"  
-e "s/@IPADDRESS@/$IPADDRESS/g"  
-e "s/@NETMASK@/$NETMASK/g"  
-e "/^s*<ask-list/,/ask-list>$/d"  
/tmp/profile/autoinst.xml > /tmp/profile/modified.xml 
]]> 
</source> 
<debug config:type="boolean">false</debug> 
<feedback config:type="boolean">true</feedback> 
</script> 
</ask> 
</ask-list>
40 
Post-script 
<post-scripts config:type="list"> 
<script> 
<debug config:type="boolean">true</debug> 
<feedback config:type="boolean">false</feedback> 
<filename>initialize</filename> 
<interpreter>shell</interpreter> 
<location><![CDATA[]]></location> 
<network_needed config:type="boolean">false</network_needed> 
<notification></notification> 
<source><![CDATA[#!/bin/bash 
cd /etc/ceph 
wget http://@MASTER_IP@/ceph/ceph.conf 
wget http://@MASTER_IP@/ceph/ceph.mon.keyring 
wget http://@MASTER_IP@/ceph/ceph.client.admin.keyring 
# Create a new OSD 
osdid=$(ceph osd create) 
# If the OSD is for a drive other than the OS drive, prepare it for use with Ceph, and mount it to the directory 
you just created: 
ln -s /ceph /var/lib/ceph/osd/ceph-${osdid} 
# Initialize the OSD data directory. 
ceph-osd -i $osdid --mkfs --mkkey 
[...]
41 
Post-script (cont) 
# Register the OSD authentication key. The value of ceph for ceph-{osd-num} in the path is the $cluster-$id. If 
your cluster name differs from ceph, use your cluster name instead: 
ceph auth add osd.${osdid} osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-${osdid}/keyring 
# WORKAROUND issue "Error EINVAL: entity osd.1 exists but key does not match" 
ceph auth del osd.${osdid} 
ceph auth add osd.${osdid} osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-${osdid}/keyring 
# Add the OSD to the CRUSH map so that the OSD can begin receiving data. 
ceph osd crush add osd.${osdid} 1 root=default 
# WORKAROUND issue with mon not found: 
touch /var/lib/ceph/osd/ceph-${osdid}/sysvinit 
# Starting the OSD 
/etc/init.d/ceph start osd.${osdid} 
# Enabling Ceph at boot 
/sbin/insserv ceph 
# Other 
echo "@MASTER_HOSTS@" >> /etc/hosts 
/usr/bin/zypper ar http://@MASTER_HOSTNAME@/repo/SLES11-SP3-x86_64-CEPH/ SLES11-SP3-x86_64- 
CEPH 
]]></source> 
</script> 
</post-scripts> 
</scripts>
SUSE Manager
43 
Automated Linux systems management 
that enables you to comprehensively 
manage SUSE Linux Enterprise and Red 
Hat Enterprise Linux systems with a 
single, centralized solution across 
physical, virtual and cloud environments. 
● Reduce complexity with automation. 
● Control, standardize and optimize 
converged, virtualized and cloud data 
centers. 
● Reduce risk and avoidable downtime 
through better change control, 
discovery and compliance tracking. 
SUSE Manager
44 
Manage the Full Life-Cycle 
 Gain control 
 Optimize 
 Make room 
for innovation
System Provisioning with SUMa
46 
PXE - steps 
• SUMa does not provide the dhcpd server 
‒ Either use the one available in your network 
‒ Or install and configure it on SUMa 
‒ zypper in dhcp-server 
‒ Edit /etc/sysconfig/dhcpd 
‒ Edit /etc/dhcpd.conf 
• Enable Bare-metal system management in SUMa 
‒ Admin > SUSE Manager Configuration > Bare-metal systems 
> Enable adding to this organization
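A minimal sketch of the DHCP side when SUMa itself runs dhcpd, assuming the SUSE Manager server at 192.168.43.150 answers PXE requests on eth0 (addresses, range and interface are assumptions): 
echo 'DHCPD_INTERFACE="eth0"' >> /etc/sysconfig/dhcpd 
cat > /etc/dhcpd.conf <<'EOF' 
subnet 192.168.43.0 netmask 255.255.255.0 { 
  range 192.168.43.200 192.168.43.220; 
  filename "pxelinux.0"; 
  next-server 192.168.43.150; 
} 
EOF 
rcdhcpd restart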
47 
Channel - Steps 
• Add SLES11 SP3 as SUSE Product 
‒ Admin > Setup Wizard > SUSE Products > add product 
• Add Ceph Channel from OBS 
‒ Channels > Manage Software Channels > create new channel 
• Add Ceph Repository 
‒ Channels > Manage Software Channels > Manage 
Repositories > create new repository 
• Assign Ceph Repository to Channel 
‒ Channels > Manage Software Channels > select channel > 
Repositories > assign repo to channel > trigger sync for repo
48 
Distribution - Steps 
• Copy SLES DVD on SUMa 
‒ scp SLES-11-SP3-DVD-x86_64-GM-DVD1.iso 
root@192.168.43.150: 
• Mount it via a loop device 
‒ mount -o loop /root/SLES-11-SP3-DVD-x86_64-GM-DVD1.iso 
/media/sles11sp3-x86_64 
• Add the entry in /etc/fstab 
‒ /root/SLES-11-SP3-DVD-x86_64-GM-DVD1.iso 
/media/sles11sp3-x86_64 iso9660 loop 0 0 
• Create new Autoinstallable Distribution 
‒ Systems > Autoinstallation > Distributions > create new 
distribution
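The same steps in order, as a small sketch that adds the mount-point creation the slide leaves implicit: 
mkdir -p /media/sles11sp3-x86_64 
mount -o loop /root/SLES-11-SP3-DVD-x86_64-GM-DVD1.iso /media/sles11sp3-x86_64 
echo "/root/SLES-11-SP3-DVD-x86_64-GM-DVD1.iso /media/sles11sp3-x86_64 iso9660 loop 0 0" >> /etc/fstab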
49 
Profile – Steps 
• Create new Autoinstallation profile 
‒ Systems > Autoinstallation > Profiles > upload new autoyast 
profile 
‒ Use Autoinstallation snippets to store common blocks of code 
that can be shared across multiple autoinstallation profiles 
• Add variables for registration 
‒ Systems > Autoinstallation > Profiles > select profile > 
Variables 
‒ For example: 
‒ registration_key=1-sles11sp3-x86_64
50 
Profile – Steps (cont) 
• Create profile: 
‒ Systems > Autoinstallation > Profiles > select profile > Details 
> File Contents 
‒ Enclose scripts within “#raw” and “#end raw” 
‒ Add needed snippets 
$SNIPPET('spacewalk/sles_no_signature_checks') 
$SNIPPET('spacewalk/sles_register_script') 
‒ Add add-on for child channels: 
<add-on> 
<add_on_products config:type="list"> 
<listentry> 
<ask_on_error config:type="boolean">true</ask_on_error> 
<media_url>http://$redhat_management_server/ks/dist/child/sles11-sp3-updates-x86_64/sles11sp3-x86_64</media_url> 
<name>SLES11-SP3-updates</name> 
<product>SLES11 SP3 updates</product> 
<product_dir>/</product_dir> 
</listentry> 
<listentry> 
<ask_on_error config:type="boolean">true</ask_on_error> 
<media_url>http://$redhat_management_server/ks/dist/child/ceph-sles11sp3-x86_64/sles11sp3-x86_64</media_url> 
<name>Ceph-SLES11-SP3</name> 
<product>Ceph SLES11 SP3</product> 
<product_dir>/</product_dir> 
</listentry> 
</add_on_products> 
</add-on>
51 
Client Registration – Steps 
• Create Group for Ceph Nodes 
‒ Systems > System Groups > create new group 
• Create Activation Key 
‒ Systems > Activation Keys 
• Add group to the Activation Key 
‒ Systems > Activation Keys > Groups > join to the group 
• Generate generic Bootstrap Script 
‒ Admin > SUSE Manager Configuration > Bootstrap Script > 
update
52 
Client Registration – Steps (cont) 
• Generate Bootstrap Script 
‒ /srv/www/htdocs/pub/bootstrap 
• Create Bootstrap Repositories 
‒ mgr-create-bootstrap-repo 
• Make sure the installation source is available 
‒ YaST > Software > Software Repositories > Add > select DVD 
• Execute Bootstrap Script 
‒ cat bootstrap-EDITED-NAME.sh | ssh root@CLIENT_MACHINE1 /bin/bash
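A sketch of tailoring the generated script before running it; the copy name bootstrap-ceph-osd.sh is hypothetical, and the activation key reuses the example value shown earlier: 
cd /srv/www/htdocs/pub/bootstrap 
cp bootstrap.sh bootstrap-ceph-osd.sh 
sed -i 's/^ACTIVATION_KEYS=.*/ACTIVATION_KEYS=1-sles11sp3-x86_64/' bootstrap-ceph-osd.sh 
cat bootstrap-ceph-osd.sh | ssh root@CLIENT_MACHINE1 /bin/bash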
Lab
Thank you. 
54 
55
Corporate Headquarters 
Maxfeldstrasse 5 
90409 Nuremberg 
Germany 
+49 911 740 53 0 (Worldwide) 
www.suse.com 
Join us on: 
www.opensuse.org 
56
Unpublished Work of SUSE LLC. All Rights Reserved. 
This work is an unpublished work and contains confidential, proprietary and trade secret information of SUSE LLC. 
Access to this work is restricted to SUSE employees who have a need to know to perform tasks within the scope of 
their assignments. No part of this work may be practiced, performed, copied, distributed, revised, modified, translated, 
abridged, condensed, expanded, collected, or adapted without the prior written consent of SUSE. 
Any use or exploitation of this work without authorization could subject the perpetrator to criminal and civil liability. 
General Disclaimer 
This document is not to be construed as a promise by any participating company to develop, deliver, or market a 
product. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making 
purchasing decisions. SUSE makes no representations or warranties with respect to the contents of this document, 
and specifically disclaims any express or implied warranties of merchantability or fitness for any particular purpose. The 
development, release, and timing of features or functionality described for SUSE products remains at the sole 
discretion of SUSE. Further, SUSE reserves the right to revise this document and to make changes to its content, at 
any time, without obligation to notify any person or entity of such revisions or changes. All SUSE marks referenced in 
this presentation are trademarks or registered trademarks of Novell, Inc. in the United States and other countries. All 
third-party trademarks are the property of their respective owners.

Weitere ähnliche Inhalte

Was ist angesagt?

Ceph - High Performance Without High Costs
Ceph - High Performance Without High CostsCeph - High Performance Without High Costs
Ceph - High Performance Without High CostsJonathan Long
 
Linux Stammtisch Munich: Ceph - Overview, Experiences and Outlook
Linux Stammtisch Munich: Ceph - Overview, Experiences and OutlookLinux Stammtisch Munich: Ceph - Overview, Experiences and Outlook
Linux Stammtisch Munich: Ceph - Overview, Experiences and OutlookDanny Al-Gaaf
 
QCT Ceph Solution - Design Consideration and Reference Architecture
QCT Ceph Solution - Design Consideration and Reference ArchitectureQCT Ceph Solution - Design Consideration and Reference Architecture
QCT Ceph Solution - Design Consideration and Reference ArchitecturePatrick McGarry
 
New Ceph capabilities and Reference Architectures
New Ceph capabilities and Reference ArchitecturesNew Ceph capabilities and Reference Architectures
New Ceph capabilities and Reference ArchitecturesKamesh Pemmaraju
 
Your 1st Ceph cluster
Your 1st Ceph clusterYour 1st Ceph cluster
Your 1st Ceph clusterMirantis
 
Quick-and-Easy Deployment of a Ceph Storage Cluster
Quick-and-Easy Deployment of a Ceph Storage ClusterQuick-and-Easy Deployment of a Ceph Storage Cluster
Quick-and-Easy Deployment of a Ceph Storage ClusterPatrick Quairoli
 
The container revolution, and what it means to operators.pptx
The container revolution, and what it means to operators.pptxThe container revolution, and what it means to operators.pptx
The container revolution, and what it means to operators.pptxRobert Starmer
 
Designing for High Performance Ceph at Scale
Designing for High Performance Ceph at ScaleDesigning for High Performance Ceph at Scale
Designing for High Performance Ceph at ScaleJames Saint-Rossy
 
Ceph - A distributed storage system
Ceph - A distributed storage systemCeph - A distributed storage system
Ceph - A distributed storage systemItalo Santos
 
BlueStore: a new, faster storage backend for Ceph
BlueStore: a new, faster storage backend for CephBlueStore: a new, faster storage backend for Ceph
BlueStore: a new, faster storage backend for CephSage Weil
 
Ceph Day Chicago - Ceph Deployment at Target: Best Practices and Lessons Learned
Ceph Day Chicago - Ceph Deployment at Target: Best Practices and Lessons LearnedCeph Day Chicago - Ceph Deployment at Target: Best Practices and Lessons Learned
Ceph Day Chicago - Ceph Deployment at Target: Best Practices and Lessons LearnedCeph Community
 
SUSE Storage: Sizing and Performance (Ceph)
SUSE Storage: Sizing and Performance (Ceph)SUSE Storage: Sizing and Performance (Ceph)
SUSE Storage: Sizing and Performance (Ceph)Lars Marowsky-Brée
 
Ceph Deployment at Target: Customer Spotlight
Ceph Deployment at Target: Customer SpotlightCeph Deployment at Target: Customer Spotlight
Ceph Deployment at Target: Customer SpotlightColleen Corrice
 
Tutorial ceph-2
Tutorial ceph-2Tutorial ceph-2
Tutorial ceph-2Tommy Lee
 
Ceph and Openstack in a Nutshell
Ceph and Openstack in a NutshellCeph and Openstack in a Nutshell
Ceph and Openstack in a NutshellKaran Singh
 
Ceph Object Storage Reference Architecture Performance and Sizing Guide
Ceph Object Storage Reference Architecture Performance and Sizing GuideCeph Object Storage Reference Architecture Performance and Sizing Guide
Ceph Object Storage Reference Architecture Performance and Sizing GuideKaran Singh
 
Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C...
Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C...Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C...
Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C...Odinot Stanislas
 
Cephfs jewel mds performance benchmark
Cephfs jewel mds performance benchmarkCephfs jewel mds performance benchmark
Cephfs jewel mds performance benchmarkXiaoxi Chen
 

Was ist angesagt? (20)

Ceph - High Performance Without High Costs
Ceph - High Performance Without High CostsCeph - High Performance Without High Costs
Ceph - High Performance Without High Costs
 
Linux Stammtisch Munich: Ceph - Overview, Experiences and Outlook
Linux Stammtisch Munich: Ceph - Overview, Experiences and OutlookLinux Stammtisch Munich: Ceph - Overview, Experiences and Outlook
Linux Stammtisch Munich: Ceph - Overview, Experiences and Outlook
 
QCT Ceph Solution - Design Consideration and Reference Architecture
QCT Ceph Solution - Design Consideration and Reference ArchitectureQCT Ceph Solution - Design Consideration and Reference Architecture
QCT Ceph Solution - Design Consideration and Reference Architecture
 
New Ceph capabilities and Reference Architectures
New Ceph capabilities and Reference ArchitecturesNew Ceph capabilities and Reference Architectures
New Ceph capabilities and Reference Architectures
 
Your 1st Ceph cluster
Your 1st Ceph clusterYour 1st Ceph cluster
Your 1st Ceph cluster
 
librados
libradoslibrados
librados
 
Quick-and-Easy Deployment of a Ceph Storage Cluster
Quick-and-Easy Deployment of a Ceph Storage ClusterQuick-and-Easy Deployment of a Ceph Storage Cluster
Quick-and-Easy Deployment of a Ceph Storage Cluster
 
The container revolution, and what it means to operators.pptx
The container revolution, and what it means to operators.pptxThe container revolution, and what it means to operators.pptx
The container revolution, and what it means to operators.pptx
 
Designing for High Performance Ceph at Scale
Designing for High Performance Ceph at ScaleDesigning for High Performance Ceph at Scale
Designing for High Performance Ceph at Scale
 
Ceph - A distributed storage system
Ceph - A distributed storage systemCeph - A distributed storage system
Ceph - A distributed storage system
 
BlueStore: a new, faster storage backend for Ceph
BlueStore: a new, faster storage backend for CephBlueStore: a new, faster storage backend for Ceph
BlueStore: a new, faster storage backend for Ceph
 
Ceph Day Chicago - Ceph Deployment at Target: Best Practices and Lessons Learned
Ceph Day Chicago - Ceph Deployment at Target: Best Practices and Lessons LearnedCeph Day Chicago - Ceph Deployment at Target: Best Practices and Lessons Learned
Ceph Day Chicago - Ceph Deployment at Target: Best Practices and Lessons Learned
 
Block Storage For VMs With Ceph
Block Storage For VMs With CephBlock Storage For VMs With Ceph
Block Storage For VMs With Ceph
 
SUSE Storage: Sizing and Performance (Ceph)
SUSE Storage: Sizing and Performance (Ceph)SUSE Storage: Sizing and Performance (Ceph)
SUSE Storage: Sizing and Performance (Ceph)
 
Ceph Deployment at Target: Customer Spotlight
Ceph Deployment at Target: Customer SpotlightCeph Deployment at Target: Customer Spotlight
Ceph Deployment at Target: Customer Spotlight
 
Tutorial ceph-2
Tutorial ceph-2Tutorial ceph-2
Tutorial ceph-2
 
Ceph and Openstack in a Nutshell
Ceph and Openstack in a NutshellCeph and Openstack in a Nutshell
Ceph and Openstack in a Nutshell
 
Ceph Object Storage Reference Architecture Performance and Sizing Guide
Ceph Object Storage Reference Architecture Performance and Sizing GuideCeph Object Storage Reference Architecture Performance and Sizing Guide
Ceph Object Storage Reference Architecture Performance and Sizing Guide
 
Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C...
Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C...Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C...
Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C...
 
Cephfs jewel mds performance benchmark
Cephfs jewel mds performance benchmarkCephfs jewel mds performance benchmark
Cephfs jewel mds performance benchmark
 

Andere mochten auch

Introduction into Ceph storage for OpenStack
Introduction into Ceph storage for OpenStackIntroduction into Ceph storage for OpenStack
Introduction into Ceph storage for OpenStackOpenStack_Online
 
Red Hat Ceph Storage: Past, Present and Future
Red Hat Ceph Storage: Past, Present and FutureRed Hat Ceph Storage: Past, Present and Future
Red Hat Ceph Storage: Past, Present and FutureRed_Hat_Storage
 
Ceph Intro and Architectural Overview by Ross Turk
Ceph Intro and Architectural Overview by Ross TurkCeph Intro and Architectural Overview by Ross Turk
Ceph Intro and Architectural Overview by Ross Turkbuildacloud
 
Ceph and cloud stack apr 2014
Ceph and cloud stack   apr 2014Ceph and cloud stack   apr 2014
Ceph and cloud stack apr 2014Ian Colle
 
Ceph, Xen, and CloudStack: Semper Melior-XPUS13 McGarry
Ceph, Xen, and CloudStack: Semper Melior-XPUS13 McGarryCeph, Xen, and CloudStack: Semper Melior-XPUS13 McGarry
Ceph, Xen, and CloudStack: Semper Melior-XPUS13 McGarryThe Linux Foundation
 
Openstack with ceph
Openstack with cephOpenstack with ceph
Openstack with cephIan Colle
 
Ceph Loves OpenStack: Why and How
Ceph Loves OpenStack: Why and HowCeph Loves OpenStack: Why and How
Ceph Loves OpenStack: Why and HowEmma Haruka Iwao
 
Using Recently Published Ceph Reference Architectures to Select Your Ceph Con...
Using Recently Published Ceph Reference Architectures to Select Your Ceph Con...Using Recently Published Ceph Reference Architectures to Select Your Ceph Con...
Using Recently Published Ceph Reference Architectures to Select Your Ceph Con...Patrick McGarry
 
Distributed Storage and Compute With Ceph's librados (Vault 2015)
Distributed Storage and Compute With Ceph's librados (Vault 2015)Distributed Storage and Compute With Ceph's librados (Vault 2015)
Distributed Storage and Compute With Ceph's librados (Vault 2015)Sage Weil
 
What's new in Jewel and Beyond
What's new in Jewel and BeyondWhat's new in Jewel and Beyond
What's new in Jewel and BeyondSage Weil
 
Keeping OpenStack storage trendy with Ceph and containers
Keeping OpenStack storage trendy with Ceph and containersKeeping OpenStack storage trendy with Ceph and containers
Keeping OpenStack storage trendy with Ceph and containersSage Weil
 
A crash course in CRUSH
A crash course in CRUSHA crash course in CRUSH
A crash course in CRUSHSage Weil
 
The State of Ceph, Manila, and Containers in OpenStack
The State of Ceph, Manila, and Containers in OpenStackThe State of Ceph, Manila, and Containers in OpenStack
The State of Ceph, Manila, and Containers in OpenStackSage Weil
 
Ceph, Now and Later: Our Plan for Open Unified Cloud Storage
Ceph, Now and Later: Our Plan for Open Unified Cloud StorageCeph, Now and Later: Our Plan for Open Unified Cloud Storage
Ceph, Now and Later: Our Plan for Open Unified Cloud StorageSage Weil
 
Red Hat Storage Day Dallas - Red Hat Ceph Storage Acceleration Utilizing Flas...
Red Hat Storage Day Dallas - Red Hat Ceph Storage Acceleration Utilizing Flas...Red Hat Storage Day Dallas - Red Hat Ceph Storage Acceleration Utilizing Flas...
Red Hat Storage Day Dallas - Red Hat Ceph Storage Acceleration Utilizing Flas...Red_Hat_Storage
 
Ceph Introduction 2017
Ceph Introduction 2017  Ceph Introduction 2017
Ceph Introduction 2017 Karan Singh
 
ceph optimization on ssd ilsoo byun-short
ceph optimization on ssd ilsoo byun-shortceph optimization on ssd ilsoo byun-short
ceph optimization on ssd ilsoo byun-shortNAVER D2
 
BlueStore: a new, faster storage backend for Ceph
BlueStore: a new, faster storage backend for CephBlueStore: a new, faster storage backend for Ceph
BlueStore: a new, faster storage backend for CephSage Weil
 
Ceph and Storage Management with openATTIC - FOSDEM 2017-02-05
Ceph and Storage Management with openATTIC - FOSDEM 2017-02-05Ceph and Storage Management with openATTIC - FOSDEM 2017-02-05
Ceph and Storage Management with openATTIC - FOSDEM 2017-02-05Lenz Grimmer
 
Ceph and RocksDB
Ceph and RocksDBCeph and RocksDB
Ceph and RocksDBSage Weil
 

Andere mochten auch (20)

Introduction into Ceph storage for OpenStack
Introduction into Ceph storage for OpenStackIntroduction into Ceph storage for OpenStack
Introduction into Ceph storage for OpenStack
 
Red Hat Ceph Storage: Past, Present and Future
Red Hat Ceph Storage: Past, Present and FutureRed Hat Ceph Storage: Past, Present and Future
Red Hat Ceph Storage: Past, Present and Future
 
Ceph Intro and Architectural Overview by Ross Turk
Ceph Intro and Architectural Overview by Ross TurkCeph Intro and Architectural Overview by Ross Turk
Ceph Intro and Architectural Overview by Ross Turk
 
Ceph and cloud stack apr 2014
Ceph and cloud stack   apr 2014Ceph and cloud stack   apr 2014
Ceph and cloud stack apr 2014
 
Ceph, Xen, and CloudStack: Semper Melior-XPUS13 McGarry
Ceph, Xen, and CloudStack: Semper Melior-XPUS13 McGarryCeph, Xen, and CloudStack: Semper Melior-XPUS13 McGarry
Ceph, Xen, and CloudStack: Semper Melior-XPUS13 McGarry
 
Openstack with ceph
Openstack with cephOpenstack with ceph
Openstack with ceph
 
Ceph Loves OpenStack: Why and How
Ceph Loves OpenStack: Why and HowCeph Loves OpenStack: Why and How
Ceph Loves OpenStack: Why and How
 
Using Recently Published Ceph Reference Architectures to Select Your Ceph Con...
Using Recently Published Ceph Reference Architectures to Select Your Ceph Con...Using Recently Published Ceph Reference Architectures to Select Your Ceph Con...
Using Recently Published Ceph Reference Architectures to Select Your Ceph Con...
 
Distributed Storage and Compute With Ceph's librados (Vault 2015)
Distributed Storage and Compute With Ceph's librados (Vault 2015)Distributed Storage and Compute With Ceph's librados (Vault 2015)
Distributed Storage and Compute With Ceph's librados (Vault 2015)
 
What's new in Jewel and Beyond
What's new in Jewel and BeyondWhat's new in Jewel and Beyond
What's new in Jewel and Beyond
 
Keeping OpenStack storage trendy with Ceph and containers
Keeping OpenStack storage trendy with Ceph and containersKeeping OpenStack storage trendy with Ceph and containers
Keeping OpenStack storage trendy with Ceph and containers
 
A crash course in CRUSH
A crash course in CRUSHA crash course in CRUSH
A crash course in CRUSH
 
The State of Ceph, Manila, and Containers in OpenStack
The State of Ceph, Manila, and Containers in OpenStackThe State of Ceph, Manila, and Containers in OpenStack
The State of Ceph, Manila, and Containers in OpenStack
 
Ceph, Now and Later: Our Plan for Open Unified Cloud Storage
Ceph, Now and Later: Our Plan for Open Unified Cloud StorageCeph, Now and Later: Our Plan for Open Unified Cloud Storage
Ceph, Now and Later: Our Plan for Open Unified Cloud Storage
 
Red Hat Storage Day Dallas - Red Hat Ceph Storage Acceleration Utilizing Flas...
Red Hat Storage Day Dallas - Red Hat Ceph Storage Acceleration Utilizing Flas...Red Hat Storage Day Dallas - Red Hat Ceph Storage Acceleration Utilizing Flas...
Red Hat Storage Day Dallas - Red Hat Ceph Storage Acceleration Utilizing Flas...
 
Ceph Introduction 2017
Ceph Introduction 2017  Ceph Introduction 2017
Ceph Introduction 2017
 
ceph optimization on ssd ilsoo byun-short
ceph optimization on ssd ilsoo byun-shortceph optimization on ssd ilsoo byun-short
ceph optimization on ssd ilsoo byun-short
 
BlueStore: a new, faster storage backend for Ceph
BlueStore: a new, faster storage backend for CephBlueStore: a new, faster storage backend for Ceph
BlueStore: a new, faster storage backend for Ceph
 
Ceph and Storage Management with openATTIC - FOSDEM 2017-02-05
Ceph and Storage Management with openATTIC - FOSDEM 2017-02-05Ceph and Storage Management with openATTIC - FOSDEM 2017-02-05
Ceph and Storage Management with openATTIC - FOSDEM 2017-02-05
 
Ceph and RocksDB
Ceph and RocksDBCeph and RocksDB
Ceph and RocksDB
 

Ähnlich wie Quick-and-Easy Deployment of a Ceph Storage Cluster with SLES

Baylisa - Dive Into OpenStack
Baylisa - Dive Into OpenStackBaylisa - Dive Into OpenStack
Baylisa - Dive Into OpenStackJesse Andrews
 
Simplifying Ceph Management with Virtual Storage Manager (VSM)
Simplifying Ceph Management with Virtual Storage Manager (VSM)Simplifying Ceph Management with Virtual Storage Manager (VSM)
Simplifying Ceph Management with Virtual Storage Manager (VSM)Ceph Community
 
ONOS SDN Controller - Clustering Tests & Experiments
ONOS SDN Controller - Clustering Tests & Experiments ONOS SDN Controller - Clustering Tests & Experiments
ONOS SDN Controller - Clustering Tests & Experiments Eueung Mulyana
 
Rmll Virtualization As Is Tool 20090707 V1.0
Rmll Virtualization As Is Tool 20090707 V1.0Rmll Virtualization As Is Tool 20090707 V1.0
Rmll Virtualization As Is Tool 20090707 V1.0guest72e8c1
 
Virtualization and automation of library software/machines + Puppet
Virtualization and automation of library software/machines + PuppetVirtualization and automation of library software/machines + Puppet
Virtualization and automation of library software/machines + PuppetOmar Reygaert
 
Deploying datacenters with Puppet - PuppetCamp Europe 2010
Deploying datacenters with Puppet - PuppetCamp Europe 2010Deploying datacenters with Puppet - PuppetCamp Europe 2010
Deploying datacenters with Puppet - PuppetCamp Europe 2010Puppet
 
Webinar - Getting Started With Ceph
Webinar - Getting Started With CephWebinar - Getting Started With Ceph
Webinar - Getting Started With CephCeph Community
 
NFD9 - Matt Peterson, Data Center Operations
NFD9 - Matt Peterson, Data Center OperationsNFD9 - Matt Peterson, Data Center Operations
NFD9 - Matt Peterson, Data Center OperationsCumulus Networks
 
Containers with systemd-nspawn
Containers with systemd-nspawnContainers with systemd-nspawn
Containers with systemd-nspawnGábor Nyers
 
Practical Tips for Novell Cluster Services
Practical Tips for Novell Cluster ServicesPractical Tips for Novell Cluster Services
Practical Tips for Novell Cluster ServicesNovell
 
3. configuring a compute node for nfv
3. configuring a compute node for nfv3. configuring a compute node for nfv
3. configuring a compute node for nfvvideos
 
OpenNebula 5.4 Hands-on Tutorial
OpenNebula 5.4 Hands-on TutorialOpenNebula 5.4 Hands-on Tutorial
OpenNebula 5.4 Hands-on TutorialOpenNebula Project
 
Qemu device prototyping
Qemu device prototypingQemu device prototyping
Qemu device prototypingYan Vugenfirer
 
Guide to clone_sles_instances
Guide to clone_sles_instancesGuide to clone_sles_instances
Guide to clone_sles_instancesSatheesh Thomas
 
Minimal OpenStack LinuxCon NA 2015
Minimal OpenStack LinuxCon NA 2015Minimal OpenStack LinuxCon NA 2015
Minimal OpenStack LinuxCon NA 2015Sean Dague
 
Network Automation Tools
Network Automation ToolsNetwork Automation Tools
Network Automation ToolsEdwin Beekman
 
IT Automation with Ansible
IT Automation with AnsibleIT Automation with Ansible
IT Automation with AnsibleRayed Alrashed
 

Ähnlich wie Quick-and-Easy Deployment of a Ceph Storage Cluster with SLES (20)

Baylisa - Dive Into OpenStack
Baylisa - Dive Into OpenStackBaylisa - Dive Into OpenStack
Baylisa - Dive Into OpenStack
 
Simplifying Ceph Management with Virtual Storage Manager (VSM)
Simplifying Ceph Management with Virtual Storage Manager (VSM)Simplifying Ceph Management with Virtual Storage Manager (VSM)
Simplifying Ceph Management with Virtual Storage Manager (VSM)
 
ONOS SDN Controller - Clustering Tests & Experiments
ONOS SDN Controller - Clustering Tests & Experiments ONOS SDN Controller - Clustering Tests & Experiments
ONOS SDN Controller - Clustering Tests & Experiments
 
RMLL / LSM 2009
RMLL / LSM 2009RMLL / LSM 2009
RMLL / LSM 2009
 
Rmll Virtualization As Is Tool 20090707 V1.0
Rmll Virtualization As Is Tool 20090707 V1.0Rmll Virtualization As Is Tool 20090707 V1.0
Rmll Virtualization As Is Tool 20090707 V1.0
 
Virtualization and automation of library software/machines + Puppet
Virtualization and automation of library software/machines + PuppetVirtualization and automation of library software/machines + Puppet
Virtualization and automation of library software/machines + Puppet
 
LSA2 - 02 Namespaces
LSA2 - 02  NamespacesLSA2 - 02  Namespaces
LSA2 - 02 Namespaces
 
Deploying datacenters with Puppet - PuppetCamp Europe 2010
Deploying datacenters with Puppet - PuppetCamp Europe 2010Deploying datacenters with Puppet - PuppetCamp Europe 2010
Deploying datacenters with Puppet - PuppetCamp Europe 2010
 
Webinar - Getting Started With Ceph
Webinar - Getting Started With CephWebinar - Getting Started With Ceph
Webinar - Getting Started With Ceph
 
NFD9 - Matt Peterson, Data Center Operations
NFD9 - Matt Peterson, Data Center OperationsNFD9 - Matt Peterson, Data Center Operations
NFD9 - Matt Peterson, Data Center Operations
 
Containers with systemd-nspawn
Containers with systemd-nspawnContainers with systemd-nspawn
Containers with systemd-nspawn
 
Practical Tips for Novell Cluster Services
Practical Tips for Novell Cluster ServicesPractical Tips for Novell Cluster Services
Practical Tips for Novell Cluster Services
 
3. configuring a compute node for nfv
3. configuring a compute node for nfv3. configuring a compute node for nfv
3. configuring a compute node for nfv
 
OpenNebula 5.4 Hands-on Tutorial
OpenNebula 5.4 Hands-on TutorialOpenNebula 5.4 Hands-on Tutorial
OpenNebula 5.4 Hands-on Tutorial
 
Qemu device prototyping
Qemu device prototypingQemu device prototyping
Qemu device prototyping
 
Guide to clone_sles_instances
Guide to clone_sles_instancesGuide to clone_sles_instances
Guide to clone_sles_instances
 
Nim
NimNim
Nim
 
Minimal OpenStack LinuxCon NA 2015
Minimal OpenStack LinuxCon NA 2015Minimal OpenStack LinuxCon NA 2015
Minimal OpenStack LinuxCon NA 2015
 
Network Automation Tools
Network Automation ToolsNetwork Automation Tools
Network Automation Tools
 
IT Automation with Ansible
IT Automation with AnsibleIT Automation with Ansible
IT Automation with Ansible
 

Quick-and-Easy Deployment of a Ceph Storage Cluster with SLES

  • 1. Quick-and-Easy Deployment of a Ceph Storage Cluster with SLES With a look at SUSE Studio, Manager and Build Service Jan Kalcic Sales Engineer jkalcic@suse.com Flavio Castelli Senior Software Engineer fcastelli@suse.com
  • 2. 2 Agenda Ceph Introduction System Provisioning with SLES System Provisioning with SUMa
  • 3. 3 Agenda Ceph Introduction System Provisioning with SLES System Provisioning with SUMa SUSE Studio SUSE Manager
  • 5. 5 What is Ceph • Open-source software-defined storage ‒ It delivers object, block, and file storage in one unified system • It runs on commodity hardware ‒ To provide an infinitely scalable Ceph Storage Cluster ‒ Where nodes communicate with each other to replicate and redistribute data dynamically • It is based upon RADOS ‒ Reliable, Autonomic, Distributed Object Store ‒ Self-healing, self-managing, intelligent storage nodes
  • 6. 6 Ceph Components Monitor Object Storage Device (OSD) Ceph Metadata Server (MDS) Ceph Storage Cluster Ceph Clients Ceph Block Device (RBD) Ceph Object Storage (RGW) Ceph Filesystem Custom implementation
  • 7. 7 Ceph Storage Cluster • Ceph Monitor ‒ It maintains a master copy of the cluster map (i.e. cluster members, state, changes, and overall health of the cluster) • Ceph Object Storage Device (OSD) ‒ It interacts with a logical disk (e.g. LUN) to store data (i.e. handle the read/write operations on the storage disks). • Ceph Metadata Server (MDS) ‒ It provides the Ceph Filesystem service. Purpose is to store filesystem metadata (directories, file ownership, access modes, etc) in high-availability Ceph Metadata Servers
  • 8. 8 Architectural Overview Ceph Clients Ceph Monitors Ceph MDS Ceph OSDs
  • 9. 9 Architectural Overview Ceph Clients Ceph Monitors Status Report, Auth Cluster Map, Auth Data I/O Ceph MDS Ceph OSDs
  • 10. 10 Deployment Overview • All Ceph clusters require: ‒ at least one monitor ‒ at least as many OSDs as copies of an object stored on the cluster • Bootstrapping the initial monitor is the first step ‒ This also sets important criteria for the cluster, (i.e. number of replicas for pools, number of placement groups per OSD, heartbeat intervals, etc.) • Add further Monitors and OSDs to expand the cluster
  • 11. 11 Monitor Bootstrapping • On mon node, create “/etc/ceph/ceph.conf” [global] fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993 mon initial members = node1 mon host = 192.168.0.1 public network = 192.168.0.0/24 auth cluster required = cephx auth service required = cephx auth client required = cephx osd journal size = 1024 filestore xattr use omap = true osd pool default size = 2 osd pool default min size = 1 osd pool default pg num = 333 osd pool default pgp num = 333 osd crush chooseleaf type = 1 Add UUID: `uuidgen` Add mon Add IP ... • Create a keyring for your cluster and generate a monitor secret key. ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
  • 12. 12 Monitor Bootstrapping (cont) • Generate an administrator keyring, generate a client.admin user and add the user to the keyring ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow' • Add the client.admin key to the ceph.mon.keyring. ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring • Generate a monitor map using the hostname(s), host IP address(es) and the FSID. Save it as /tmp/monmap: ‒ monmaptool --create --add node1 192.168.0.1 --fsid a7f64266-0894-4f1e-a635- d0aeaca0e993 /tmp/monmap
  • 13. 13 Monitor Bootstrapping (cont) • Create a default data directory (or directories) on the monitor host(s). sudo mkdir /var/lib/ceph/mon/ceph-node1 • Populate the monitor daemon(s) with the monitor map and keyring. ceph-mon --mkfs -i node1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring • Start the monitor(s). sudo /etc/init.d/ceph start mon.node1 • Verify that the monitor is running: ceph -s
  • 14. 14 Adding OSDs • Once you have your initial monitor(s) running, you should add OSDs ‒ Your cluster cannot reach an active + clean state until you have enough OSDs to handle the number of copies of an object (e.g., osd pool default size = 2 requires at least two OSDs). • Ceph provides the ceph-disk utility, which can prepare a disk, partition or directory for use with Ceph ‒ The ceph-disk utility creates the OSD ID by incrementing the index. ‒ ceph-disk will add the new OSD to the CRUSH map under the host for you.
  • 15. 15 Adding OSDs (cont) • Prepare the OSD. sudo ceph-disk prepare --cluster ceph --cluster-uuid a7f64266-0894-4f1e-a635- d0aeaca0e993 --fs-type ext4 /dev/hdd1 • Activate the OSD: sudo ceph-disk activate /dev/hdd1 • To watch the placement groups peer: ceph -w • To view the tree, execute the following: ceph osd tree
  • 16. Lab
  • 18. 18 SUSE Studio • Web application • Makes it easy to create appliances • Targets different versions of openSUSE and SLE • Supports different output formats: ‒ ISO, Preload ISO ‒ USB disk/ hard disk image ‒ Virtual machines (VMware, Xen, KVM, Hyper-V) ‒ Cloud (AWS, OpenStack, SUSE Cloud, Azure)
  • 19. 19 Add packages from OBS project
  • 21. 21 Add custom files to the appliance
  • 22. 22 Build a preload ISO
  • 23. 23 What is a “preload ISO” Bootable ISO which: 1)Asks user permission to wipe the disk. 2)Erase the contents of the disk. 3)Partitions the disk (using LVM in our case). 4)Dumps the contents of the appliance to the disk.
  • 24. SLES as PXE Boot Server
  • 25. 25 Introduction • A PXE boot server allows you to boot up a system over the network instead of a DVD. • A key requirement of system provisioning. • When used in conjunction with AutoYaST you can have a fully automated installation of SUSE Linux Enterprise. • Components needed: ‒ DHCP server ‒ TFTP server ‒ PXE boot server (syslinux) ‒ Software repositories
  • 26. 26 DHCP Server • Install, configure and enable the DHCP Server as usual. • /etc/dhcpd.conf subnet 192.168.43.0 netmask 255.255.255.0 { range 192.168.43.1 192.168.43.254; filename "pxelinux.0"; next-server 192.168.43.100; } • Two parameters: ‒ next-server - used to specify the host address of the server from which the initial boot file is to be loaded. ‒ filename: used to specify the name of the initial boot file which is to be loaded by a client.
  • 27. 27 TFTP Server • Install and enable tftp server. • /etc/xinetd.d/tftpd service tftp { socket_type = dgram protocol = udp wait = yes flags = IPv6 IPv4 user = root server = /usr/sbin/in.tftpd server_args = -v -s /srv/tftp disable = no }
  • 28. 28 PXE Boot Server • Install “syslinux” and configure the PXE boot server /srv/tftp/pxelinux.cfg/default /srv/tftp/f1 /srv/tftp/initrd /srv/tftp/linux /srv/tftp/pxelinux.0 • /srv/tftp/pxelinux.cfg/default # hard disk label harddisk kernel linux append SLX=0x202 # ceph installation label cephosd kernel linux append initrd=initrd splash=silent showopts install=http://192.168.43.100/repo/JeOS-SLES11-SP3-x86_64/CD1 autoyast=http://192.168.43.100/autoyast/osd.xml implicit 0 display f1 prompt 1 timeout 0
  • 29. 29 PXE Boot Server (cont) • /srv/tftp/f1 harddisk - Boot from Harddisk cephosd - Add a new Ceph OSD to the existing Ceph Storage Cluster cephmon - TODO Add a new Ceph Mon to the existing Ceph Storage Cluster cephmds - TODO Add a new Ceph MDS to the existing Ceph Storage Cluster • Copy files “initrd” and “linux” from SLES DVD # cp /media/cdrom/boot/i386/loader/initrd /srv/tftpboot/ # cp /media/cdrom/boot/i386/loader/linux /srv/tftpboot/ • man syslinux
  • 30. 30 PXE Boot Server (cont) • Example for more complex scenarios /srv/tftp/pxelinux.cfg/default /srv/tftp/f1 /srv/tftp/initrd-SLES11SP2 /srv/tftp/linux-SLES11SP2 /srv/tftp/initrd-SLES11SP3 /srv/tftp/linux-SLES11SP3 /srv/tftp/initrd-SLES12 /srv/tftp/linux-SLES12 /srv/tftp/pxelinux.0 ___________________ # ceph on SLES11 SP2 label cephosd-sles11sp2 Copy “initrd” and “linux” from more distributions. Unique names are needed! kernel linux append initrd=initrd splash=silent showopts install=http://192.168.43.100/repo/SLES11-SP2-x86_64/CD1 autoyast=http://192.168.43.100/autoyast/osd.xml # ceph on SLES11 SP3 label cephosd-sles11sp3 kernel linux append initrd=initrd splash=silent showopts install=http://192.168.43.100/repo/SLES11-SP3-x86_64/CD1 autoyast=http://192.168.43.100/autoyast/osd.xml # ceph on SLES12 label cephosd-sles12 kernel linux append initrd=initrd splash=silent showopts install=http://192.168.43.100/repo/SLES12-x86_64/CD1 autoyast=http://192.168.43.100/autoyast/osd.xml Create entries in “/srv/tftp/pxelinux.cfg/default” and then update file “f1” accordingly.
  • 31. 31 Installation Server • It used to create and manage the repositories hosted on the system and used by clients for network installation. • It is basically about: ‒ Configure initial server options (i.e. directory, protocol, alias) ‒ Import DVD content (or ISO) for each distribution • YaST is your friend ‒ yast2 insserver
  • 32. LAB
  • 34. 34 Introduction • AutoYaST is a system for installing one or more SUSE Linux systems automatically ‒ without user intervention ‒ or when customization is required • Installations are performed using an AutoYaST profile (XML) with installation and configuration data. • Use cases: ‒ Massive deployment ‒ Deployment on demand ‒ Custom installation ‒ Changing hardware or a wide variety of hardware
  • 35. 35 Add-on and Partitions <add-on> <add_on_products config:type="list"> <listentry> <media_url><![CDATA[http://@MASTER_IP@/repo/SLES11-SP3-x86_64-CEPH]]></media_url> <product>SLES11-SP3-x86_64-CEPH</product> <product_dir>/</product_dir> <ask_on_error config:type="boolean">true</ask_on_error> </listentry> </add_on_products> </add-on> <partitioning config:type="list"> <drive> <initialize config:type="boolean">true</initialize> <partitions config:type="list"> <partition> <create config:type="boolean">true</create> <crypt_fs config:type="boolean">false</crypt_fs> <filesystem config:type="symbol">xfs</filesystem> <format config:type="boolean">true</format> <fstopt>defaults</fstopt> <loop_fs config:type="boolean">false</loop_fs> <lv_name>LVceph</lv_name> <mount>/ceph</mount> <mountby config:type="symbol">device</mountby> <partition_id config:type="integer">131</partition_id> <resize config:type="boolean">false</resize> <size>max</size> </partition>
  • 36. 36 Networking <networking> <dhcp_options> <dhclient_client_id></dhclient_client_id> <dhclient_hostname_option>AUTO</dhclient_hostname_option> </dhcp_options> <dns> <dhcp_hostname config:type="boolean">true</dhcp_hostname> <hostname>@HOSTNAME@</hostname> <domain>@DOMAIN@</domain> <resolv_conf_policy></resolv_conf_policy> <write_hostname config:type="boolean">true</write_hostname> </dns> <interfaces config:type="list"> <interface> <bootproto>static</bootproto> <device>eth0</device> <ipaddr>@IPADDRESS@</ipaddr> <netmask>@NETMASK@</netmask> <startmode>auto</startmode> <usercontrol>no</usercontrol> </interface> </interfaces> <managed config:type="boolean">false</managed> </networking>
  • 37. 37 Ask # 1 <ask-list config:type="list"> <ask> <title>Basic Host Configuration</title> <dialog config:type="integer">0</dialog> <element config:type="integer">1</element> <pathlist config:type="list"> <path>networking,dns,hostname</path> </pathlist> <question>Full hostname (i.g. linux.site)</question> <stage>initial</stage> <script> <filename>hostname.sh</filename> <rerun_on_error config:type="boolean">false</rerun_on_error> <environment config:type="boolean">true</environment> <source><![CDATA[#!/bin/bash echo $VAL > /tmp/answer_hostname ]]> </source> <debug config:type="boolean">false</debug> <feedback config:type="boolean">true</feedback> </script> </ask> [...]
  • 38. 38 Ask # 2 <ask> <title>Basic Host Configuration</title> <dialog config:type="integer">0</dialog> <element config:type="integer">2</element> <pathlist config:type="list"> <path>networking,interfaces,0,ipaddr</path> </pathlist> <question>IP Address and CIDR prefix (i.g. 192.168.1.1/24)</question> <stage>initial</stage> <script> <filename>ip.sh</filename> <rerun_on_error config:type="boolean">true</rerun_on_error> <environment config:type="boolean">true</environment> <source><![CDATA[#!/bin/bash function check_host() { local host=$1 [ -z "$host" ] && echo "You must provide a valid hostname!" && exit 1 [ -z "${host//[A-Za-z0-9]/}" ] && echo -e "Is this a valid full hostname? (i.g. linux.site)nCould not find the domain name in "$ {host}"" && exit 1 tmp="$(echo "$host" | awk -F "." '{print $2}')" [ -z "$tmp" ] && echo -e "Is this a valid full hostname? (i.g. linux.site)nCould not find the domain name in "${host}"" && exit 1 tmp="${host//[A-Za-z0-9]/}" [ "${#tmp}" -ne 1 ] && echo -e "Is this a valid full hostname? (i.g. linux.site)nA full hostname can contain only one dot" && exit 1 return 0 } cidr2mask () { # Number of args to shift, 255..255, first non-255 byte, zeroes set -- $(( 5 - ($1 / 8) )) 255 255 255 255 $(( (255 << (8 - ($1 % 8))) & 255 )) 0 0 0 [ $1 -gt 1 ] && shift $1 || shift echo ${1-0}.${2-0}.${3-0}.${4-0} } [...]
  • 39. 39 Ask # 2 (cont)
function check_ip() {
  local ip=$1 tmp
  [ -z "$ip" ] && echo "You must provide a valid IP address plus CIDR prefix!" && exit 1
  [ "${#ip}" -lt 9 ] && echo -e "Is this a valid IP address plus CIDR prefix?\nYou entered only '${#ip}' chars." && exit 1
  [ -n "${ip//[0-9./]/}" ] && echo -e "Is this a valid IP address plus CIDR prefix?\nFound invalid character: '${ip//[0-9.]/}'." && exit 1
  tmp="$(echo "$ip" | awk -F "/" '{print $2}')"
  [ -z "$tmp" ] && echo -e "The CIDR prefix must be provided too!\n(e.g. 192.168.1.1/24)\n" && exit 1
  return 0
}
HOSTNAME="$(cat /tmp/answer_hostname)"
check_host "$HOSTNAME"
check_ip "$VAL"
DOMAIN=$(echo "$HOSTNAME" | awk -F "." '{print $2}')
HOSTNAME=$(echo "$HOSTNAME" | awk -F "." '{print $1}')
IPADDRESS=$(echo "$VAL" | cut -d "/" -f1)
CIDR=$(echo "$VAL" | awk -F "/" '{print $2}')
NETMASK=$(cidr2mask "$CIDR")
sed -e "s/@HOSTNAME@/$HOSTNAME/g" -e "s/@DOMAIN@/$DOMAIN/g" -e "s/@IPADDRESS@/$IPADDRESS/g" -e "s/@NETMASK@/$NETMASK/g" -e "/^\s*<ask-list/,/ask-list>$/d" /tmp/profile/autoinst.xml > /tmp/profile/modified.xml
]]>
    </source>
    <debug config:type="boolean">false</debug>
    <feedback config:type="boolean">true</feedback>
  </script>
</ask>
</ask-list>
  • 40. 40 Post-script
<post-scripts config:type="list">
  <script>
    <debug config:type="boolean">true</debug>
    <feedback config:type="boolean">false</feedback>
    <filename>initialize</filename>
    <interpreter>shell</interpreter>
    <location><![CDATA[]]></location>
    <network_needed config:type="boolean">false</network_needed>
    <notification></notification>
    <source><![CDATA[#!/bin/bash
cd /etc/ceph
wget http://@MASTER_IP@/ceph/ceph.conf
wget http://@MASTER_IP@/ceph/ceph.mon.keyring
wget http://@MASTER_IP@/ceph/ceph.client.admin.keyring

# Create a new OSD
osdid=$(ceph osd create)

# If the OSD is for a drive other than the OS drive, prepare it for use with Ceph,
# and mount it to the directory you just created:
ln -s /ceph /var/lib/ceph/osd/ceph-${osdid}

# Initialize the OSD data directory.
ceph-osd -i $osdid --mkfs --mkkey
[...]
  • 41. 41 Post-script (cont)
# Register the OSD authentication key. The value of ceph for ceph-{osd-num} in the
# path is the $cluster-$id. If your cluster name differs from ceph, use your cluster
# name instead.
ceph auth add osd.${osdid} osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-${osdid}/keyring

# WORKAROUND issue "Error EINVAL: entity osd.1 exists but key does not match"
ceph auth del osd.${osdid}
ceph auth add osd.${osdid} osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-${osdid}/keyring

# Add the OSD to the CRUSH map so that the OSD can begin receiving data.
ceph osd crush add osd.${osdid} 1 root=default

# WORKAROUND issue with mon not found:
touch /var/lib/ceph/osd/ceph-${osdid}/sysvinit

# Starting the OSD
/etc/init.d/ceph start osd.${osdid}

# Enabling Ceph at boot
/sbin/insserv ceph

# Other
echo "@MASTER_HOSTS@" >> /etc/hosts
/usr/bin/zypper ar http://@MASTER_HOSTNAME@/repo/SLES11-SP3-x86_64-CEPH/ SLES11-SP3-x86_64-CEPH
]]></source>
  </script>
</post-scripts>
</scripts>
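As a quick sanity check after this post-script has run on a new node, the standard Ceph CLI can be used; a minimal sketch, not part of the original slides, to be run from any host holding the admin keyring:

ceph osd tree    # the new osd.<id> should appear under root=default and be marked "up"
ceph -s          # overall cluster status; it should move towards active+clean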
  • 43. 43 SUSE Manager
Automated Linux systems management that enables you to comprehensively manage SUSE Linux Enterprise and Red Hat Enterprise Linux systems with a single, centralized solution across physical, virtual and cloud environments.
● Reduce complexity with automation.
● Control, standardize and optimize converged, virtualized and cloud data centers.
● Reduce risk and avoidable downtime through better change control, discovery and compliance tracking.
  • 44. 44 Manage the Full Life-Cycle
‒ Gain control
‒ Optimize
‒ Make room for innovation
  • 46. 46 PXE - steps • SUMa does not provide the dhcpd server ‒ Either use the one available in your network ‒ Or install and configure it on SUMa ‒ zypper in dhcp-server ‒ Edit /etc/sysconfig/dhcpd ‒ Edit /etc/dhcpd.conf • Enable Bare-metal system management in SUMa ‒ Admin > SUSE Manager Configuration > Bare-metal systems > Enable adding to this organization
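For reference, a minimal /etc/dhcpd.conf stanza of the kind these steps assume; the subnet, addresses and filename below are illustrative placeholders, not values taken from the deck:

subnet 192.168.43.0 netmask 255.255.255.0 {
  range 192.168.43.200 192.168.43.220;
  next-server 192.168.43.150;   # the TFTP server the PXE client should contact
  filename "pxelinux.0";        # the boot loader it should fetch from there
}

In /etc/sysconfig/dhcpd, DHCPD_INTERFACE would also need to point at the interface serving this subnet (e.g. "eth0").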
  • 47. 47 Channel - Steps • Add SLES11 SP3 as SUSE Product ‒ Admin > Setup Wizard > SUSE Products > add product • Add Ceph Channel from OBS ‒ Channels > Manage Software Channels > create new channel • Add Ceph Repository ‒ Channels > Manage Software Channels > Manage Repositories > create new repository • Assign Ceph Repository to Channel ‒ Channels > Manage Software Channels > select channel > Repositories > assign repo to channel > trigger sync for repo
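The final repository sync can also be triggered from the command line instead of the web UI; a hedged sketch, assuming the channel created above was given the label ceph-sles11sp3-x86_64:

spacewalk-repo-sync --channel ceph-sles11sp3-x86_64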
  • 48. 48 Distribution - Steps • Copy SLES DVD on SUMa ‒ scp SLES-11-SP3-DVD-x86_64-GM-DVD1.iso root@192.168.43.150: • Mount it via a loop device ‒ mount -o loop /root/SLES-11-SP3-DVD-x86_64-GM-DVD1.iso /media/sles11sp3-x86_64 • Add the entry in /etc/fstab ‒ /root/SLES-11-SP3-DVD-x86_64-GM-DVD1.iso /media/sles11sp3-x86_64 iso9660 loop 0 0 • Create new Autoinstallable Distribution ‒ Systems > Autoinstallation > Distributions > create new distribution
  • 49. 49 Profile – Steps • Create new Autoinstallation profile ‒ Systems > Autoinstallation > Profiles > upload new autoyast profile ‒ Use Autoinstallation snippets to store common blocks of code that can be shared across multiple autoinstallation profiles • Add variables for registration ‒ Systems > Autoinstallation > Profiles > select profile > Variables ‒ For example: ‒ registration_key=1-sles11sp3-x86_64
  • 50. 50 Profile – Steps (cont) • Create profile: ‒ Systems > Autoinstallation > Profiles > select profile > Details > File Contents ‒ Enclose scripts within “#raw” and “#end raw” ‒ Add needed snippets $SNIPPET('spacewalk/sles_no_signature_checks') $SNIPPET('spacewalk/sles_register_script') ‒ Add add-on for child channels:
<add-on>
  <add_on_products config:type="list">
    <listentry>
      <ask_on_error config:type="boolean">true</ask_on_error>
      <media_url>http://$redhat_management_server/ks/dist/child/sles11-sp3-updates-x86_64/sles11sp3-x86_64</media_url>
      <name>SLES11-SP3-updates</name>
      <product>SLES11 SP3 updates</product>
      <product_dir>/</product_dir>
    </listentry>
    <listentry>
      <ask_on_error config:type="boolean">true</ask_on_error>
      <media_url>http://$redhat_management_server/ks/dist/child/ceph-sles11sp3-x86_64/sles11sp3-x86_64</media_url>
      <name>Ceph-SLES11-SP3</name>
      <product>Ceph SLES11 SP3</product>
      <product_dir>/</product_dir>
    </listentry>
  </add_on_products>
</add-on>
  • 51. 51 Client Registration – Steps • Create Group for Ceph Nodes ‒ Systems > System Groups > create new group • Create Activation Key ‒ Systems > Activation Keys • Add group to the Activation Key ‒ Systems > Activation Keys > Groups > join to the group • Generate generic Bootstrap Script ‒ Admin > SUSE Manager Configuration > Bootstrap Script > update
  • 52. 52 Client Registration – Steps (cont) • Generate Bootstrap Script ‒ /srv/www/htdocs/pub/bootstrap • Create Bootstrap Repositories ‒ mgr-create-bootstrap-repo • Make sure the installation source is available ‒ YaST > Software > Software Repositories > Add > select DVD • Execute Bootstrap Script ‒ cat bootstrap-EDITED-NAME.sh | ssh root@CLIENT_MACHINE1 /bin/bash
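Before executing it, the generated script usually needs a couple of variables filled in; a hedged sketch of the relevant lines (the activation key shown is the example key used for the profile variables, adapt it to your own):

# /srv/www/htdocs/pub/bootstrap/bootstrap-EDITED-NAME.sh
ACTIVATION_KEYS=1-sles11sp3-x86_64   # activation key joined to the Ceph nodes group
ORG_GPG_KEY=                         # optional: GPG key(s) to deploy to the client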
  • 53. Lab
  • 54. Thank you.
  • 56. Corporate Headquarters Maxfeldstrasse 5 90409 Nuremberg Germany +49 911 740 53 0 (Worldwide) www.suse.com Join us on: www.opensuse.org 56
  • 57. Unpublished Work of SUSE LLC. All Rights Reserved. This work is an unpublished work and contains confidential, proprietary and trade secret information of SUSE LLC. Access to this work is restricted to SUSE employees who have a need to know to perform tasks within the scope of their assignments. No part of this work may be practiced, performed, copied, distributed, revised, modified, translated, abridged, condensed, expanded, collected, or adapted without the prior written consent of SUSE. Any use or exploitation of this work without authorization could subject the perpetrator to criminal and civil liability. General Disclaimer This document is not to be construed as a promise by any participating company to develop, deliver, or market a product. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. SUSE makes no representations or warranties with respect to the contents of this document, and specifically disclaims any express or implied warranties of merchantability or fitness for any particular purpose. The development, release, and timing of features or functionality described for SUSE products remains at the sole discretion of SUSE. Further, SUSE reserves the right to revise this document and to make changes to its content, at any time, without obligation to notify any person or entity of such revisions or changes. All SUSE marks referenced in this presentation are trademarks or registered trademarks of Novell, Inc. in the United States and other countries. All third-party trademarks are the property of their respective owners.

Editor's Notes

  1. Good afternoon everybody and thanks for being here. Let me introduce myself first of all: I am Jan Kalcic. I joined Novell SUSE in 2004 as a Technical Support Engineer, later I also worked as a Professional Services Engineer, or Consultant, and very recently I moved to the role of Sales Engineer.
  2. This is the agenda for the session, we are going to talk about several topics, not only one, not only ceph. In fact, this is a kind of multifunction session, where for sure Ceph is the central topic, but we will also talk about system provisioning with SLES and SUSE Manager as a generic approach, which can work very well also in the context of a Ceph cluster deployment, but also other environments ...
  3. Other than that, we are also going to have a quick look at SUSE Studio and SUSE Manager. We will see later how they fit into this discussion.
  4. So, let's start with a Ceph introduction. But please be aware this is not a deep-dive session, but more an introduction to understand the main components and to get an idea of the deployment process. There are many things to say about this technology, and I suggest following the other sessions from the storage guys, or you can also go to the booth to talk directly with them.
  5. So, what is Ceph first of all? It is an Open Source software-defined storage solution, which is basically about, in a very simple way, enterprise class storage that uses standard hardware, or commodity hardware, or widely available hardware, with all the important storage functions like replication, distribution, thin provisioning, etc... So eventually, Ceph is a very good alternative to the expensive proprietary storage solutions. And it is based on RADOS, you are going to hear this word more times, which is basically the core engine behind ceph.
  6. What are the Ceph components? To make it easy to understand and remember, we can think of the Ceph components as two groups: 1. the Ceph Storage Cluster, which includes the Monitor, the OSD and the Metadata Server; these are the three components that form the Ceph Storage Cluster. 2. And, on the client side, we have: the Ceph Block Device (a.k.a. RBD), which provides resizable, thin-provisioned block devices with snapshotting and cloning; Ceph Object Storage (a.k.a. RGW), which is the service that provides a REST API compatible with Amazon S3 and OpenStack Swift; the Ceph Filesystem, which is the service that provides a POSIX-compliant filesystem usable with mount or as a filesystem in user space; and custom implementations created by using librados, a library with support for C, C++, Python, Ruby and PHP.
  7. A little bit more on the components of the Ceph Storage Cluster. We have the Ceph Monitor, which contains a master copy of the cluster map. The cluster map is basically a full map of the cluster with information about the cluster members, their status, the health... So the Ceph Monitor has all the information which is needed by the clients to access data. Then we have the OSD, which is the piece of software that primarily handles the operations of storing data on the disk. So the OSD can also be seen as the disk itself, or the partition, where data is put, the container of our data. And last, the Metadata Server, which provides the filesystem service with the purpose of storing filesystem metadata. This is something new and not actually ready for production environments. Please consider it is not supported by upstream either.
  8. To give you an architectural overview of the Ceph components: top left we have the clients, RADOS block devices, RADOS gateway, or custom clients... and then we have the components of the Ceph Storage Cluster we have seen on the previous slide: some monitors, some OSDs and some MDSs.
  9. In this slide we hide the Metadata Servers, as they are not supported at the moment, and we can see the main communication among the other components, where basically: the Ceph clients communicate with the monitor to get the cluster map and for the authentication, if used; after that they perform data I/O with the OSDs. And the Ceph monitors communicate with the OSDs to get status reports in order to maintain the cluster map provided to the clients. It is very important to mention that a cluster will run just fine with a single monitor; however, this is a single point of failure. To ensure high availability in a production Ceph Storage Cluster, you should run Ceph with multiple monitors (at least 3) so that the failure of a single monitor WILL NOT bring down your entire cluster.
  10. To understand the deployment process we have to understand a couple of more things: - A Ceph storage cluster requires at least one monitor, however, this is a SPOF so at least 3 in a production environment, - And at least as many OSDs as copies of an object stored on the cluster. Of course we can have more OSDs for limitless scalability in terms of size. The first step to create the cluster is bootstrapping the initial monitor, which is also where we set important criteria for the cluster. After that, we can add further monitors as well as OSDs to expand the cluster
  11. We won't go through each command here, first of all because we do not have time for this in the session, but also because with SUSE Storage on SLE12 there will be ceph-deploy as a tool which includes most of the single commands reported here. Anyway, it is important to understand that we have to create a configuration file, ceph.conf, with at least some mandatory information, for example the UUID, and then perform a series of commands until we start the monitor daemon and verify that it is actually running.
  12. Here we go ahead with the setup of the first monitor by creating the administration keyring as well as the client keyring.
  13. At this point, once we have finished with the configuration, we can finally start the monitor and verify that it is up and running.
  14. Once you have your monitor up and running, you can then start to add the OSDs. Please consider that the cluster cannot reach an active + clean state until you have enough OSDs to handle the number of copies of an object (e.g., osd pool default size = 2 requires at least two OSDs).
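As an aside (not in the original notes), the replica count of a pool can be inspected or changed with standard Ceph commands; a hedged sketch, using the default "rbd" pool as an example:

ceph osd pool get rbd size      # how many copies this pool keeps
ceph osd pool set rbd size 2    # only if you really want to change it

Until at least that many OSDs are in, placement groups stay degraded and the cluster will not report active+clean.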
  15. And once you have your OSD up and running you can continue to add as many OSDs as you need in your cluster, of course. LAB!!!! So, this concludes the introduction to ceph. At this point let me switch to my lab environment to create my first ceph storage cluster with one monitor and one OSD, very very quickly.
  16. Ok, back to the original topic. We have one server with one Ceph Monitor and one OSD; our next step is to expand the cluster by adding either a new Ceph Monitor or a new OSD. Again, we will do this in a different way, as we don't want to install the OS from scratch and then configure it by hand. So, let's now talk about SLES as a PXE boot server, a system provisioning solution which suits very well the need to expand our Ceph cluster.
  17. A PXE boot server is basically a server which is used to install new systems from the network, without using the installation media. So, it is about importing the installation media on the server and making it available to the network for the installation. Other than this, the very interesting thing is actually about using AutoYaST along with the PXE boot server to perform a fully automated installation. In the end, it comes down to a DHCP server, a TFTP server, syslinux and the software repositories. Let's have a look at those components one by one.
  18. If you do not already have a DHCP server in your network, then you have to create one, of course. The important parameters are "next-server" and "filename", which are basically needed to specify the server and the file to be loaded by the DHCP clients for the network installation.
  19. The Trivial File Transfer Protocol is needed to let clients download the file to boot from the network. Once installed, we need to enable it, either in the extended internet services daemon configuration, like here, or as a stand-alone daemon if we prefer.
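As a hedged illustration of the xinetd variant mentioned here (the paths are the usual SLES defaults, double-check them on your system):

# /etc/xinetd.d/tftp
service tftp
{
        socket_type = dgram
        protocol    = udp
        wait        = yes
        user        = root
        server      = /usr/sbin/in.tftpd
        server_args = -s /srv/tftpboot
        disable     = no
}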
  20. The PXE boot loader, also known as "syslinux", is the boot loader for Linux, and it is available with the rpm package "syslinux", very simple. Once installed, we can proceed with the configuration of the PXE boot server by creating a file system structure similar to this one, where basically we have: the "default" file, which is the configuration file of syslinux; the display file "f1", which contains what the user will see with the boot loader; and the initrd and linux files from the installation media of the distribution we want to make installable via network.
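A hedged sketch of such a structure and of the "default" file; the TFTP root /srv/tftpboot, the label name and the install URL are assumptions to be adapted to your setup:

/srv/tftpboot/pxelinux.0
/srv/tftpboot/pxelinux.cfg/default
/srv/tftpboot/f1
/srv/tftpboot/sles11sp3/initrd
/srv/tftpboot/sles11sp3/linux

# /srv/tftpboot/pxelinux.cfg/default
default harddisk
display f1
prompt  1
timeout 600

label sles11sp3
  kernel sles11sp3/linux
  append initrd=sles11sp3/initrd install=http://192.168.43.150/repo/SLES11-SP3-x86_64

label harddisk
  localboot 0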
  21. Here we have an example of the display file "f1". We can create the content we want, as long as the label we define is actually associated with the PXE configuration file. Further info is in the man page of syslinux, of course, but you can also find a lot of documents on the internet and in our knowledgebase with all the details of the configuration of a PXE boot server.
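For illustration, a minimal "f1" display file matching the hypothetical "sles11sp3" label from the sketch above:

# /srv/tftpboot/f1
Welcome to the PXE boot server

Type a label and press Enter to install over the network:

  sles11sp3   SLES 11 SP3 (x86_64) network installation
  harddisk    boot from the local disk (default)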
  22. Here is an example of a more complex configuration of the PXE boot server, where we have more distributions available for the network installation. We copy the initrd and linux files from several distributions (SLES 10, SLES 11, SLES 12, whatever...) and keep the configuration file as well as the display file configured accordingly. In this way the user can boot the new server, the commodity hardware, and decide to install one distribution or another from the network.
  23. Finally, the last component is the installation server, which is the repository where the software for the installation is hosted. It could be HTTP or FTP, and it is very easy to configure by using YaST. So, now, let's get back to our lab environment and let me show you all of this in practice. FORLAB This system we have just installed, where we have the monitor and OSD running, is also configured as a PXE boot server.
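One simple way to set this up, sketched under the assumption that Apache is already serving /srv/www/htdocs over HTTP (the YaST "Installation Server" module does the same interactively):

mkdir -p /srv/www/htdocs/repo/SLES11-SP3-x86_64
mount -o loop SLES-11-SP3-DVD-x86_64-GM-DVD1.iso /srv/www/htdocs/repo/SLES11-SP3-x86_64

# or interactively:
yast2 instserver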
  24. As anticipated before, using AutoYaST2 along with the PXE boot server we can install our SUSE Linux systems automatically, which means without user intervention, even when customization is required. It is also known as unattended installation. Using AutoYaST2, multiple systems sharing the same environment but not necessarily identical hardware, our commodity hardware, and performing similar tasks, like the nodes of our Ceph cluster, can be installed in parallel, very easily and quickly. So, a Ceph storage cluster is absolutely a good use case for AutoYaST.
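The hook between PXE and AutoYaST is a single extra kernel parameter; a hedged example building on the hypothetical "default" file sketched earlier (both URLs are placeholders):

label sles11sp3-ceph
  kernel sles11sp3/linux
  append initrd=sles11sp3/initrd install=http://192.168.43.150/repo/SLES11-SP3-x86_64 autoyast=http://192.168.43.150/autoyast/ceph-node.xml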
  25. Let's take a quick look at the sections of the AutoYaST profile (it is an XML file) which are more interesting for our case. Here we can see the configuration of the add-on product, which is basically our repository for the Ceph software. We can also see the partitioning configuration, where, for this lab and only for this lab, we are using a logical volume as storage for our OSD.
  26. Here we can see the networking configuration, and this is very interesting to understand a very useful trick we can use with AutoYaST. As you can see, here we are using some pseudo-variables for the hostname, the domain name, the IP address and the netmask, which we are going to replace with the right values the user is going to provide during the installation. Let me go to the next couple of slides in order to show you what I mean.
  27. Here we are customizing our installation process, and we do this by creating a new dialog with a list of asks, or questions. The first question is about the hostname. All we do here is store the user's value in a file which we will use later.
  28. The next question is more complex, here we ask for the ip address and then execute a list of checks by using some bash functions, this is only in order to validate the user input so that the IP address is actually a valid IP address and the hostname is actually a valid hostname, until we eventually....
  29. Replace our pseudo-variables we have seen before with the user input values we actually need for the installation of this system. And we do this by using sed and replacing the current AutoYaST profile with a new one. So basically, we are saying: all I need for the installation of this system, this Ceph node, is the hostname and IP address; we could add whatever we need, of course, for example a specific configuration for the storage, the file system choice, XFS or Btrfs, and then let the installation go by itself.