Tuesday, August 6th session of the vBrownBag OpenStack Sack Lunch Series: Couch to OpenStack. We cover Cinder, the Block Storage Service that presents volumes to OpenStack instances. Credit to Ken Pepple for the OpenStack Project Diagram
2. - git clone https://github.com/bunchc/Couch_to_OpenStack.git
- cd Couch_to_OpenStack
- vagrant up
Build Time!
3. - Subscribe & Recordings: http://bit.ly/BrownbagPodcast
- Sign up for the rest of the series:
http://openstack.prov12n.com/about-couch-to-openstack/
Some Logistics
5. - New Edition: http://www.packtpub.com/openstack-cloud-computing-cookbook-second-edition/book
- Old Edition: http://amzn.to/12eI6rX
Buy the Book
6. 7/2/2013 – Intro to OpenStack
7/9/2013 – Vagrant Primer
7/16/2013 – Identity services (Keystone)
7/23/2013 – Image services (Glance)
7/30/2013 – Compute Services (Nova)
8/6/2013 – Block Storage / Volume Services (Cinder) << We Are Here
8/13/2013 – Networking Services (Neutron fka Quantum)
8/20/2013 – Monitoring & Troubleshooting
8/27/2013 – HA OpenStack
9/3/2013 – DevOps Deployments
Note: Dates are subject to change depending on how far we get in each lesson.
The Rest of the Series
7. Use the automated Nova Install and manually install
Cinder
Remember we have a G+ Support group here:
http://bit.ly/C2OSGooglePlus
Homework Review
8. - Block Storage for Cloud Instances over iSCSI
- Originally part of Nova as nova-volume. Now, a
separate project.
- Cinder Volumes are presented as /dev/vd* in Linux
Instances and as a new volume in Storage Manager in
Windows Instances.
- Mount and format them like normal disks.
Cinder Intro
9. - Creates the Controller, Compute, and Cinder (Block)
Nodes
- Sets variables required for Cinder deployment
- Creates a Cinder Service and Endpoint in Keystone
- Updates MySQL
- Creates a Cinder DB
- Assigns the Cinder User to the DB
- Installs Cinder
- Configures Cinder settings
Build – What’s it doing?
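The Keystone and MySQL steps above can be sketched roughly as follows (a sketch of the Grizzly-era commands, not the exact contents of the build scripts; the password, controller IP, and service UUID are placeholders):

```shell
# Register Cinder as a volume service in Keystone
keystone service-create --name cinder --type volume \
    --description "Cinder Volume Service"
keystone endpoint-create --region RegionOne \
    --service-id <service_uuid from the command above> \
    --publicurl "http://<controller_ip>:8776/v1/%(tenant_id)s" \
    --adminurl "http://<controller_ip>:8776/v1/%(tenant_id)s" \
    --internalurl "http://<controller_ip>:8776/v1/%(tenant_id)s"

# Create the Cinder database and grant the cinder user access to it
mysql -u root -p <<EOF
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '<password>';
EOF
```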
15. Cinder Component Functions
Component Purpose
cinder-api Interact with users and other OpenStack services
cinder-scheduler Determine which Cinder node to present storage from
cinder-volume* Block Storage Server
tgtd* A Linux iSCSI target. Not an OpenStack Component, but
important for test environments.
16. - Private Network for OpenStack Compute instances internally:
nova-manage network create privateNet --fixed_range_v4=10.10.<3rd Octet>.2/24
--network_size=20 --bridge_interface=eth2
- Assign a floating (Public) IP address for users to access the Instance:
nova-manage floating create --ip_range=172.16.<3rd Octet>.1/28
- Create a keypair using Nova Client with the following commands:
nova keypair-add demo > /vagrant/demo.pem
chmod 0600 /vagrant/*.pem
- Nova Security Rules to enable ping and ssh to our Instances:
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
- Fix nova-network on the Compute Node
sudo killall dnsmasq
sudo stop nova-network
sudo start nova-network
- Deploy an Ubuntu Image
nova boot myInstance --image <get UUID from nova image-list> --flavor 2 --key_name demo
Configure a Nova Instance
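Once the boot command returns, the instance can be verified and given one of the floating IPs created earlier (a sketch; the instance name matches the boot command above, and the floating IP is a placeholder):

```shell
# Watch the instance go from BUILD to ACTIVE
nova list

# Allocate a floating IP and attach it to the instance
nova floating-ip-create
nova add-floating-ip myInstance <floating_ip from the command above>

# Confirm the instance answers on its floating IP
ping -c 3 <floating_ip>
```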
17. - Verify the source LVM Volume (cinder-volumes) was created
pvscan
PV /dev/sda5 VG precise64 lvm2 [79.76 GiB / 0 free]
PV /dev/loop2 VG cinder-volumes lvm2 [5.00 GiB / 5.00 GiB free]
Total: 2 [84.75 GiB] / in use: 2 [84.75 GiB] / in no VG: 0 [0 ]
- Create the Cinder volume that we will present to the VM
cinder create --display-name c2OS 1
- Verify the volume was created
cinder list
lvdisplay cinder-volumes
nova volume-list (command only works on the Nova or Controller Nodes)
Create the Cinder Volume
18. Use the Cinder Volume
- Attach the Volume to the Instance
nova volume-attach <instance_uuid> <volume_uuid> /dev/vdc
- Login to the Instance
ssh -i /vagrant/demo.pem ubuntu@<instance IP>
- Format and mount the volume
fdisk -l
sudo mkfs.ext4 /dev/vd<substitute correct letter>
sudo mkdir /mnt1
sudo mount /dev/vd<substitute correct letter> /mnt1
df -h
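To keep the volume mounted across reboots, an /etc/fstab entry can be added inside the instance (a sketch, assuming the volume appeared as /dev/vdb; substitute the correct device letter for your instance):

```shell
# Persist the mount across reboots; nofail keeps boot from hanging
# if the volume is ever detached
echo "/dev/vdb /mnt1 ext4 defaults,nofail 0 2" | sudo tee -a /etc/fstab

# Verify the entry mounts cleanly without rebooting
sudo umount /mnt1
sudo mount -a
df -h /mnt1
```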
20. For next week’s session, we will add another node to the deployment: Quantum (Neutron)
Networking.
We will need to perform a few extra steps beyond what we have done in the previous sessions:
1. Edit the Vagrantfile to generate an additional server and give it the name “quantum”
2. The additional server that we create should have its own shell script file with its
hostname as the filename (ex: quantum.sh).
3. The controller.sh will need to be extended to create a quantum database, an endpoint,
service, etc. Also, the nova.conf file will need to be extended to use Quantum for Network
Services
4. Post ideas, questions, comments on the Google Plus Community: http://bit.ly/C2OSGooglePlus
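The controller.sh additions in step 3 will look much like the Cinder setup from this session (a sketch, not the finished homework script; the password, controller IP, and service UUID are placeholders):

```shell
# Quantum database and user
mysql -u root -p <<EOF
CREATE DATABASE quantum;
GRANT ALL PRIVILEGES ON quantum.* TO 'quantum'@'%' IDENTIFIED BY '<password>';
EOF

# Keystone service and endpoint for the network service
keystone service-create --name quantum --type network \
    --description "Quantum Network Service"
keystone endpoint-create --region RegionOne \
    --service-id <service_uuid from the command above> \
    --publicurl "http://<controller_ip>:9696/" \
    --adminurl "http://<controller_ip>:9696/" \
    --internalurl "http://<controller_ip>:9696/"
```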
Homework!
Editor's notes
If you see the following after deploying your Instance:

root@compute:~# ps -ef | grep dns
107    4450    1 0 14:31 ?     00:00:00 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf
root   6003 5897 0 19:17 pts/3 00:00:00 grep --color=auto dns

you may need to do the following:

killall dnsmasq
service nova-network restart

root@compute:~# ps -ef | grep dns
nobody 6259    1 0 19:22 ?     00:00:00 /usr/sbin/dnsmasq --strict-order --bind-interfaces --conf-file= --pid-file=/var/lib/nova/networks/nova-br100.pid --listen-address=10.10.139.1 --except-interface=lo --dhcp-range=set:privateNet,10.10.139.3,static,255.255.255.224,120s --dhcp-lease-max=32 --dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf --dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro --domain=novalocal
root   6260 6259 0 19:22 ?     00:00:00 /usr/sbin/dnsmasq --strict-order --bind-interfaces --conf-file= --pid-file=/var/lib/nova/networks/nova-br100.pid --listen-address=10.10.139.1 --except-interface=lo --dhcp-range=set:privateNet,10.10.139.3,static,255.255.255.224,120s --dhcp-lease-max=32 --dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf --dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro --domain=novalocal
root   6775 5897 0 19:26 pts/3 00:00:00 grep --color=auto dns