OpenStack: Inside Out

Etsuji Nakai
Senior Solution Architect
Red Hat
ver1.0 2014/02/22
Copyright (C) 2014 National Institute of Informatics, All rights reserved.
$ who am i
■ Etsuji Nakai
  - Senior solution architect and cloud evangelist at Red Hat.
  - The author of the "Professional Linux Systems" series.
    ● Available in Japanese and Korean. Translation offers from publishers are welcome ;-)

  - Self-study Linux: Deploy and Manage by yourself
  - Professional Linux Systems: Technology for Next Decade
  - Professional Linux Systems: Deployment and Management
  - Professional Linux Systems: Network Management

Contents
■ Overview of OpenStack
■ Major components of OpenStack
■ Internal architecture of Nova and Cinder
■ Architecture overview of Neutron
■ Internal architecture of LinuxBridge plugin
■ Internal architecture of Open vSwitch plugin
■ Configuration steps of virtual network

Note: Use of RDO (Grizzly) is assumed in this document.

Overview of OpenStack

Computing resources in OpenStack cloud
■ The end-users of OpenStack can create and configure the following computing resources in their private tenants through the web console and/or APIs.
  - Virtual networks
  - VM instances
  - Block volumes
■ Each user belongs to one or more projects.
  - Users in the same project share the common computing resources in their project environment.
  - Each project owns (virtual) computing resources which are independent of other projects.

[Diagram: an OpenStack user and the external network connected to a project environment containing a virtual router, virtual switches, VM instances (guest OS), block volumes, and user data.]
Logical view of OpenStack virtual network
■ Each tenant has its own virtual router which works like "the broadband router in your home network."
  - Tenant users add virtual switches behind the router and assign private subnet addresses to them. It's possible to use subnets that overlap with those of other tenants.
■ When launching an instance, the end-user selects the virtual switches to connect it to.
  - The number of virtual NICs of the instance corresponds to the number of switches it connects to. Private IPs are assigned via DHCP.

[Diagram: the external network connects to the virtual router for tenant A (virtual switch 192.168.101.0/24) and the virtual router for tenant B (virtual switch 192.168.102.0/24).]
Private IP and Floating IP
■ When accessing from the external network, a "Floating IP" is attached to the VM instance.
  - A range of IP addresses of the external network which can be used as Floating IPs is pooled and distributed to each tenant in advance.
  - A Floating IP is NAT-ed to the corresponding Private IP on the virtual router.
  - Access from a VM instance to the external network is possible without assigning a Floating IP; the IP masquerade feature of the virtual router is used in this case.

[Diagram: a client connects from the external network to a web server using its Floating IP; the web server and a DB server connect to each other using their Private IPs.]
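The address rewriting described above can be sketched in a few lines. This is a toy model, not OpenStack code: all addresses are illustrative, and a real virtual router implements the same mapping with iptables DNAT/SNAT (masquerade) rules.

```python
# Toy model of the tenant virtual router's NAT behavior:
# a 1:1 mapping of Floating IP -> Private IP for inbound traffic,
# and masquerading behind the router's external address for
# instances that have no Floating IP. Addresses are illustrative.

ROUTER_EXTERNAL_IP = "172.16.1.10"   # assumed external address of the router
floating_to_private = {"172.16.1.101": "192.168.101.3"}
private_to_floating = {v: k for k, v in floating_to_private.items()}

def inbound(dst_ip: str) -> str:
    """Rewrite the destination of a packet arriving from the external network."""
    return floating_to_private.get(dst_ip, dst_ip)

def outbound(src_ip: str) -> str:
    """Rewrite the source of a packet leaving for the external network."""
    # 1:1 NAT when a Floating IP is assigned, masquerade otherwise.
    return private_to_floating.get(src_ip, ROUTER_EXTERNAL_IP)
```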
VM instance creation
■ When launching a new VM instance, the following options should be specified.
  - Instance type (flavor)
  - Template image
  - Virtual network (it's possible to connect the instance to multiple networks)
  - Security group
  - Key pair

Supported import image formats:

  Format        Description
  ------------  ---------------------
  raw           Flat image file
  AMI/AKI/ARI   Used with Amazon EC2
  qcow2         Used with Linux KVM
  VDI           Used with VirtualBox
  VMDK          Used with VMware
  VHD           Used with Hyper-V

[Diagram: the template image is downloaded to provide the guest OS; the instance connects to the virtual network on the external network side and is protected by a security group.]
Key pair authentication for SSH connection
■ A user registers his/her public key in advance. It's injected into the guest OS when launching a new instance.
  - Key pairs are registered per user; they are not shared among multiple users.

[Diagram: (1) The user registers the public key in advance in the user information database. (2) The public key is injected into the guest OS of the VM instance. (3) The user authenticates with the secret key.]
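As an aside, the fingerprint that identifies a registered key pair (shown later by `nova keypair-list`) is a hash of the public key. Below is a minimal sketch, assuming the legacy MD5-of-key-blob format that OpenSSH tooling used at the time; the helper name is mine, not an OpenStack API.

```python
# Sketch: derive an MD5-style fingerprint (colon-separated hex pairs)
# from an OpenSSH public-key line like "ssh-rsa AAAA... comment".
# Assumption: fingerprint = MD5 over the base64-decoded key blob,
# the legacy OpenSSH format. Illustrative only.
import base64
import hashlib

def md5_fingerprint(pub_key_line: str) -> str:
    blob = base64.b64decode(pub_key_line.split()[1])  # raw key bytes
    digest = hashlib.md5(blob).hexdigest()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))
```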
Instance types and corresponding disk areas
■ The following is the list of instance types created by default.
  - The root disk is extended to the specified size after being copied from the template image (except m1.tiny).

  Instance type (flavor)   vCPU   Memory   root disk   temp disk   swap disk
  m1.tiny                  1      512MB    0GB         0           0
  m1.small                 1      2GB      20GB        0           0
  m1.medium                2      4GB      40GB        0           0
  m1.large                 4      8GB      80GB        0           0
  m1.xlarge                8      16GB     160GB       0           0

■ The admin users can define new instance types.
  - The following is an example of using a temp disk and a swap disk.

  NAME      MAJ:MIN  RM  SIZE  RO  TYPE  MOUNTPOINT
  vda       252:0     0   20G   0  disk              <- root disk
  └─vda1    252:1     0   20G   0  part  /
  vdb       252:16    0    5G   0  disk  /mnt        <- temp disk
  vdc       252:32    0    1G   0  disk  [SWAP]      <- swap disk

  - Since these disks are discarded when the instance is destroyed, persistent data should be stored in different places, typically in block volumes.

Snapshot of VM instances
■ By taking a snapshot of a running instance, you can copy its root disk and reuse it as a template image.

[Diagram: launch an instance from a template image; create an instance snapshot, which is a copy of the root disk; launch a new instance from the snapshot.]
Block volume as persistent data store
■ Block volumes remain after a VM instance is destroyed, so they can be used as a persistent data store.

[Diagram: (1) Create a new block volume. (2) Attach it to a running instance to store user data. (3) Create a snapshot. (4) Create a new block volume from the snapshot. A volume can also be re-attached to another instance.]

Boot from block volume
■ It's possible to copy a template image to a new block volume to create a bootable block volume.
  - When booting from a block volume, the contents of the guest OS remain even after the instance is destroyed.
  - You can create a snapshot of the bootable volume, and create a new bootable volume from it when launching a new instance.

[Diagram: create a block volume from a template image (copy); boot an instance directly from the block volume; create a snapshot of the volume.]
Major components of OpenStack

Major components of OpenStack
■ OpenStack is a set of component modules for various services and functions.
  - Swift: Object store
    ● Amazon S3-like object storage.
  - Nova: Virtual machine life cycle management
  - Glance: Virtual machine image catalog
    ● Actual images are stored in the backend storage, typically in Swift.
  - Cinder: Virtual disk volumes
    ● Amazon EBS-like volume management.
  - Keystone: Centralized authentication and service catalog system
  - Neutron: Virtual network management API (formerly known as Quantum)
    ● Actual network provisioning is delegated to external plugin modules.
  - Horizon: Web-based self-service portal

Modules work together through REST API
■ Modules work together through REST API calls and the message queue.
  - Operations can be automated with external programs through the REST API.

[Diagram: a client PC on the public network accesses Horizon. Horizon creates virtual machines via the Nova Scheduler, retrieves template images via Glance (VM template images stored in Swift), and creates virtual networks via Neutron on the network node. Keystone provides the authentication service; QPID/MySQL provide the message queue and backend RDB. On the management network, Nova Compute nodes start virtual machines and attach virtual disk volumes (iSCSI) served by Cinder from its disk images.]

API request call
■ There are two cases when API requests are issued.
  - When the end-user sends a request call directly or via the Horizon dashboard.
  - When one component sends a request call to another component.

[Diagram: Horizon (dashboard) receives web access and issues API calls. Nova (VM instances) calls Neutron (virtual network) to connect to virtual switches, Cinder (block volumes) to attach block volumes, and Glance (VM templates) to download template images. Keystone (user authentication) backs all components. MySQL stores infrastructure data, and QPID delivers messages to agents.]

User authentication for API requests
■ You need to be authenticated before sending requests to APIs.
  - End-users and components obtain a "token" for the API operation from Keystone before sending requests to APIs. (Each component has its own user ID representing it in Keystone.)
  - When obtaining the token, the URL of the target API is also retrieved from Keystone, so end-users only need to know the URL of the Keystone API in advance.

[Diagram: Keystone (user authentication) serving Horizon (dashboard), Nova (VM instances), Glance (VM templates), Neutron (virtual network), and Cinder (block volumes).]

Token mechanism of Keystone authentication
■ Since OpenStack clients make many API calls to various components, authenticating with ID/password for every call is undesirable in terms of security and performance.
■ Instead, the clients obtain a "token" as a "license" for API calls in advance, and send the token ID to the component they use.
  - The component receiving the request validates the token ID with Keystone before accepting the request.
  - The generated token is stored in Keystone for a defined period (default: 24 hours). Clients can reuse it until it expires, so they don't need to obtain a new token for each request call.

[Diagram: the client obtains a token from the Keystone server (authenticated with ID/password); the generated token is stored in Keystone; Keystone sends back the token ID; the client sends requests with the token ID; the receiving component validates the token ID and checks the client's role with Keystone.]
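The flow above can be sketched as follows. `FakeKeystone` is a stand-in simulation, not the real Keystone API; it only illustrates obtaining a token once and reusing it until it expires.

```python
# Sketch of the token workflow: authenticate once with ID/password,
# cache the token, and reuse it until it expires (Keystone's default
# lifetime is 24 hours). A real client would POST to the Keystone API;
# the server side is simulated here for illustration.
import time
import uuid

TOKEN_LIFETIME = 24 * 3600  # seconds

class FakeKeystone:
    def __init__(self):
        self.tokens = {}                      # token ID -> expiry time

    def issue_token(self, user, password):
        # (1) Authenticate with ID/password; (2) store the generated token.
        token_id = uuid.uuid4().hex
        self.tokens[token_id] = time.time() + TOKEN_LIFETIME
        return token_id

    def validate(self, token_id):
        # Called by the component receiving a request.
        return self.tokens.get(token_id, 0) > time.time()

class Client:
    def __init__(self, keystone):
        self.keystone = keystone
        self.token = None

    def get_token(self):
        # Reuse the cached token; re-authenticate only when it is
        # missing or no longer valid.
        if self.token is None or not self.keystone.validate(self.token):
            self.token = self.keystone.issue_token("demo_user", "passw0rd")
        return self.token
```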

Command operations of Keystone (1)
■ When using the standard command line tools of OpenStack, you specify the user name, password, tenant, and API URL with environment variables.
  - The Keystone API has different URLs (port numbers) for admin users and general users. Port 35357 is used for admin users.
  - You can also specify them with command line options.
  - The following is an example of a Keystone operation using the default admin user "admin". The keystonerc_admin file is generated by packstack under /root.

# cat keystonerc_admin
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=714f1ab569a64a3b
export OS_AUTH_URL=http://172.16.1.11:35357/v2.0/
export PS1='[\u@\h \W(keystone_admin)]$ '
# . keystonerc_admin
# keystone user-list
+----------------------------------+------------+---------+-------------------+
|                id                |    name    | enabled |       email       |
+----------------------------------+------------+---------+-------------------+
| 589a800d70534655bfade5504958afd6 |   admin    |   True  |   test@test.com   |
| 3c45a1f5a88d4c1d8fb07b51ed72cd55 |   cinder   |   True  |  cinder@localhost |
| f23d88041e5245ee8cc8b0a5c3ec3f6c | demo_admin |   True  |                   |
| 44be5165fdf64bd5907d07aa1aaa5dab | demo_user  |   True  |                   |
| cd75770810634ed3a09d92b61aacf0a7 |   glance   |   True  |  glance@localhost |
| a38561ed906e48468cf1759918735c53 |    nova    |   True  |   nova@localhost  |
| 157c8846521846e0abdd16895dc8f024 |  quantum   |   True  | quantum@localhost |
+----------------------------------+------------+---------+-------------------+

Command operations of Keystone (2)
■ The following is an example of showing the registered API services and their URLs.
  - Command line tools for other components internally use this mechanism to retrieve the API of the target component.

# keystone service-list
+----------------------------------+----------+----------+----------------------------+
|                id                |   name   |   type   |        description         |
+----------------------------------+----------+----------+----------------------------+
| 5ea55cbee90546d1abace7f71808ad73 |  cinder  |  volume  |       Cinder Service       |
| e92e73a765be4beca9f12f5f5d9943e0 |  glance  |  image   |  Openstack Image Service   |
| 3631d835081344eb873f1d0d5057314d | keystone | identity | OpenStack Identity Service |
| 8db624ad713e440492aeccac6ab70a90 |   nova   | compute  | Openstack Compute Service  |
| e9f02d3803ab44f1a369602010864a34 | nova_ec2 |   ec2    |        EC2 Service         |
| 5889a1e691584e539aa121ab31194cca | quantum  | network  | Quantum Networking Service |
+----------------------------------+----------+----------+----------------------------+
# keystone endpoint-list
+----------------------------------+-----------+------------------------------------------||-+----------------------------------+
|                id                |   region  |                 publicurl                ||  |            service_id            |
+----------------------------------+-----------+------------------------------------------||-+----------------------------------+
| 0e96a30d9ce742ecb0bf123eebf84ac0 | RegionOne | http://172.16.1.11:8774/v2/%(tenant_id)s ||  | 8db624ad713e440492aeccac6ab70a90 |
| 928a38f18cc54040a0aa53bd3da99390 | RegionOne |         http://172.16.1.11:9696/         ||  | 5889a1e691584e539aa121ab31194cca |
| d46cebe4806b43c4b48499285713ac7a | RegionOne |         http://172.16.1.11:9292          ||  | e92e73a765be4beca9f12f5f5d9943e0 |
| ebdd4e61571945b7801554908caf5bae | RegionOne | http://172.16.1.11:8776/v1/%(tenant_id)s ||  | 5ea55cbee90546d1abace7f71808ad73 |
| ebec661dd65b4d4bb12fe67c25b2c77a | RegionOne |       http://172.16.1.11:5000/v2.0       ||  | 3631d835081344eb873f1d0d5057314d |
| f569475b6d364a04837af6d6a577befe | RegionOne |  http://172.16.1.11:8773/services/Cloud  ||  | e9f02d3803ab44f1a369602010864a34 |
+----------------------------------+-----------+------------------------------------------||-+----------------------------------+

■ Each command line tool provides the "help" sub command to show the list of sub commands and their details.

# keystone help             <- Shows the list of all sub commands
# keystone help user-list   <- Shows the details of the "user-list" sub command
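What the client tools do internally with this catalog can be sketched as a simple lookup. The data below is abbreviated from the listings on this page; the function name is illustrative only, not a Keystone API.

```python
# Sketch: resolve the public URL for a service type from the service
# and endpoint lists that Keystone returns. IDs and URLs are taken
# from the "keystone service-list" / "keystone endpoint-list" output
# shown above (abbreviated to two services).
SERVICES = {
    "8db624ad713e440492aeccac6ab70a90": "compute",
    "5889a1e691584e539aa121ab31194cca": "network",
}
ENDPOINTS = [
    {"service_id": "8db624ad713e440492aeccac6ab70a90",
     "publicurl": "http://172.16.1.11:8774/v2/%(tenant_id)s"},
    {"service_id": "5889a1e691584e539aa121ab31194cca",
     "publicurl": "http://172.16.1.11:9696/"},
]

def endpoint_for(service_type, tenant_id):
    """Return the public endpoint URL for a service type,
    substituting the tenant ID where the URL requires it."""
    for ep in ENDPOINTS:
        if SERVICES.get(ep["service_id"]) == service_type:
            return ep["publicurl"] % {"tenant_id": tenant_id}
    raise KeyError(service_type)
```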
Template image registration with Glance (1)
■ You can register new template images with Glance. The registered images become available from Nova.

[Diagram: Glance (VM templates) alongside Keystone, Horizon, Nova, Neutron, and Cinder.]
Template image registration with Glance (2)
■ The following is an example of registering a new template image as the general user "demo_user". The image is downloaded from the specified URL. The keystonerc_demo_user file needs to be created manually. Port 5000 is used for general users.

# cat keystonerc_demo_user
export OS_USERNAME=demo_user
export OS_TENANT_NAME=demo
export OS_PASSWORD=passw0rd
export OS_AUTH_URL=http://172.16.1.11:5000/v2.0/
export PS1='[\u@\h \W(keystone_demouser)]$ '
# . keystonerc_demo_user
# glance image-create --name "Fedora19" \
    --disk-format qcow2 --container-format bare --is-public true \
    --copy-from http://cloud.fedoraproject.org/fedora-19.x86_64.qcow2
# glance image-list
+--------------------------------------+----------+-------------+------------------+-----------+--------+
| ID                                   | Name     | Disk Format | Container Format | Size      | Status |
+--------------------------------------+----------+-------------+------------------+-----------+--------+
| 702d0c4e-b06c-4c15-85e5-9bb612eb6414 | Fedora19 | qcow2       | bare             | 237371392 | active |
+--------------------------------------+----------+-------------+------------------+-----------+--------+

Virtual network operations with Neutron
■ Through the Neutron API, end-users can create virtual networks dedicated to their own tenants.
  - Details will be explained in "Configuration steps of virtual network."

Note: the command name "quantum" has been replaced with "neutron" in the Havana release.

# . keystonerc_demo_user
# quantum net-list
+--------------------------------------+-------------+-------------------------------------------------------+
| id                                   | name        | subnets                                               |
+--------------------------------------+-------------+-------------------------------------------------------+
| 843a1586-6082-4e9f-950f-d44daa83358c | private01   | 9888df89-a17d-4f4c-b427-f28cffe8fed2 192.168.101.0/24 |
| d3c763f0-ebf0-4717-b3fc-cda69bcd1957 | private02   | 23b26d98-2277-4fb5-8895-3f42cde7e1fd 192.168.102.0/24 |
| d8040897-44b0-46eb-9c51-149dfe351bbe | ext-network | 1b8604a4-f39d-49de-a97c-3e40117a7516 192.168.199.0/24 |
+--------------------------------------+-------------+-------------------------------------------------------+

VM instance creation with Nova
■ When Nova receives an instance creation request, it communicates with Glance and Neutron through their APIs.
  - Through the Glance API, it downloads the template image to the compute node.
  - Through the Neutron API, it attaches the launched instance to the virtual network.

[Diagram: Nova (VM instances) connects instances to virtual switches via Neutron (virtual network) and downloads template images from Glance (VM templates).]

Command operations to launch an instance (1)
■ The following shows how the end-user checks the necessary information before launching an instance.

# . keystonerc_demo_user
# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public | extra_specs |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
| 1  | m1.tiny   | 512       | 0    | 0         |      | 1     | 1.0         | True      | {}          |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      | {}          |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      | {}          |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      | {}          |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      | {}          |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
# nova keypair-list
+-------+-------------------------------------------------+
| Name  | Fingerprint                                     |
+-------+-------------------------------------------------+
| mykey | 31:8c:0e:43:67:40:f6:17:a3:f8:3f:d5:73:8e:d0:30 |
+-------+-------------------------------------------------+
# nova image-list        (Nova retrieves the image list through the Glance API.)
+--------------------------------------+----------+--------+--------+
| ID                                   | Name     | Status | Server |
+--------------------------------------+----------+--------+--------+
| 702d0c4e-b06c-4c15-85e5-9bb612eb6414 | Fedora19 | ACTIVE |        |
+--------------------------------------+----------+--------+--------+
# nova net-list          (Nova retrieves the network list through the Neutron API.)
+--------------------------------------+-------------+------+
| ID                                   | Label       | CIDR |
+--------------------------------------+-------------+------+
| 843a1586-6082-4e9f-950f-d44daa83358c | private01   | None |
| d3c763f0-ebf0-4717-b3fc-cda69bcd1957 | private02   | None |
| d8040897-44b0-46eb-9c51-149dfe351bbe | ext-network | None |
+--------------------------------------+-------------+------+
# nova secgroup-list
+---------+-------------+
| Name    | Description |
+---------+-------------+
| default | default     |
+---------+-------------+

Command operations to launch an instance (2)
■ The following launches an instance using the information from the previous page.

# nova boot --flavor m1.small --image Fedora19 --key-name mykey \
    --security-groups default --nic net-id=843a1586-6082-4e9f-950f-d44daa83358c vm01
+-----------------------------+--------------------------------------+
| Property                    | Value                                |
+-----------------------------+--------------------------------------+
| status                      | BUILD                                |
| updated                     | 2013-11-22T06:22:52Z                 |
| OS-EXT-STS:task_state       | scheduling                           |
| key_name                    | mykey                                |
| image                       | Fedora19                             |
| hostId                      |                                      |
| OS-EXT-STS:vm_state         | building                             |
| flavor                      | m1.small                             |
| id                          | f40c9b76-3891-4a5f-a62c-87021ba277ce |
| security_groups             | [{u'name': u'default'}]              |
| user_id                     | 2e57cd295e3f4659b151dd80f3a73468     |
| name                        | vm01                                 |
| adminPass                   | 5sUFyKhgovV6                         |
| tenant_id                   | 555b49dc8b6e4d92aa74103bfb656e70     |
| created                     | 2013-11-22T06:22:51Z                 |
| OS-DCF:diskConfig           | MANUAL                               |
| metadata                    | {}                                   |
...snip...
+-----------------------------+--------------------------------------+
# nova list
+--------------------------------------+------+--------+-------------------------+
| ID                                   | Name | Status | Networks                |
+--------------------------------------+------+--------+-------------------------+
| f40c9b76-3891-4a5f-a62c-87021ba277ce | vm01 | ACTIVE | private01=192.168.101.3 |
+--------------------------------------+------+--------+-------------------------+
Command operations to launch an instance (3)
■ You can specify a file with "--user-data" to use a customization script (user data).
  - The following is an example of launching an instance with a customization script, and then adding a floating IP.

# cat hello.txt
#!/bin/sh
echo 'Hello, World!' > /etc/motd
# nova boot --flavor m1.small --image Fedora19 --key-name mykey \
    --security-groups default --nic net-id=843a1586-6082-4e9f-950f-d44daa83358c \
    --user-data hello.txt vm01
# nova floating-ip-list
+--------------+-------------+----------+-------------+
| Ip           | Instance Id | Fixed Ip | Pool        |
+--------------+-------------+----------+-------------+
| 172.16.1.101 | None        | None     | ext-network |
| 172.16.1.102 | None        | None     | ext-network |
| 172.16.1.103 | None        | None     | ext-network |
| 172.16.1.104 | None        | None     | ext-network |
| 172.16.1.105 | None        | None     | ext-network |
+--------------+-------------+----------+-------------+
# nova add-floating-ip vm01 172.16.1.101
# ssh -i ~/mykey.pem fedora@172.16.1.101
The authenticity of host '172.16.1.101 (172.16.1.101)' can't be established.
RSA key fingerprint is b7:24:54:63:1f:02:33:4f:81:a7:47:90:c1:1b:78:5a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.16.1.101' (RSA) to the list of known hosts.
Hello, World!
[fedora@vm01 ~]$
Floating IP association with Neutron API
■ When adding a floating IP to an instance with multiple NICs, you need to use the Neutron API to specify the NIC port to associate.
  - After identifying the port ID which corresponds to the private IP, associate the floating IP with that port ID.

# nova boot --flavor m1.small --image Fedora19 --key-name mykey --security-groups default \
    --nic net-id=843a1586-6082-4e9f-950f-d44daa83358c \
    --nic net-id=d3c763f0-ebf0-4717-b3fc-cda69bcd1957 \
    vm01
# nova list
+--------------------------------------+------+--------+--------------------------------------------------+
| ID                                   | Name | Status | Networks                                         |
+--------------------------------------+------+--------+--------------------------------------------------+
| e8d0fa19-130f-4502-acfe-132962134846 | vm01 | ACTIVE | private01=192.168.101.3; private02=192.168.102.3 |
+--------------------------------------+------+--------+--------------------------------------------------+
# quantum port-list
+--------------------------------------+------+-------------------+------------------------------------+
| id                                   | name | mac_address       | fixed_ips                          |
+--------------------------------------+------+-------------------+------------------------------------+
| 10c3cd17-78f5-443f-952e-1e3e427e477f |      | fa:16:3e:37:7b:a6 | ... "ip_address": "192.168.102.3"} |
| d0057651-e1e4-434c-a81d-c950b9c06333 |      | fa:16:3e:e6:d9:4c | ... "ip_address": "192.168.101.3"} |
+--------------------------------------+------+-------------------+------------------------------------+
# quantum floatingip-list
+--------------------------------------+------------------+---------------------+---------+
| id                                   | fixed_ip_address | floating_ip_address | port_id |
+--------------------------------------+------------------+---------------------+---------+
| 06d24f23-c2cc-471f-a4e6-59cf00578141 |                  | 172.16.1.101        |         |
| 89b49a78-8fd7-461b-8fe2-fba4a341c8a2 |                  | 172.16.1.102        |         |
+--------------------------------------+------------------+---------------------+---------+
# quantum floatingip-associate 06d24f23-c2cc-471f-a4e6-59cf00578141 d0057651-e1e4-434c-a81d-c950b9c06333

Operations for key pairs and security groups
■ Security related operations such as creating/registering key pairs and defining security groups can be done through the Nova API.
  - The following creates a new key pair "key01" and saves the private (secret) key in "~/.ssh/key01.pem".

# nova keypair-add key01 > ~/.ssh/key01.pem
# chmod 600 ~/.ssh/key01.pem

  - The following registers the public key of an existing key pair as "key02".

# nova keypair-add --pub-key ~/.ssh/id_rsa.pub key02

  - The following creates a new security group "group01" and allows access to TCP port 22.

# nova secgroup-create group01 "My security group."
# nova secgroup-add-rule group01 tcp 22 22 0.0.0.0/0

■ Note that since security groups are now under the control of Neutron, it's also worth knowing the commands to configure them through the quantum (neutron) API.

# quantum security-group-create group01 --description "My security group."
# quantum security-group-rule-create --protocol tcp \
    --port-range-min 22 --port-range-max 22 \
    --remote-ip-prefix "0.0.0.0/0" group01

Block volume creation with Cinder
■ Block volumes can be created, deleted, and snapshotted through the Cinder API.
  - When attaching/detaching block volumes to/from running instances, you need to send a request to the Nova API. Nova then works together with Cinder through API calls.

[Diagram: Nova (VM instances) attaches block volumes provided by Cinder (block volumes).]
Command operations for block volumes
■ The following is an example of creating a 5GB block volume and attaching/detaching it to/from a running instance.

# cinder create --display-name volume01 5
# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 78b4d23b-3b57-4a38-9f6e-10e5048170ef | available |   volume01   |  5   |     None    |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
# nova volume-attach vm01 78b4d23b-3b57-4a38-9f6e-10e5048170ef auto
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |   <- The device name seen from the guest OS.
| serverId | f40c9b76-3891-4a5f-a62c-87021ba277ce |
| id       | 78b4d23b-3b57-4a38-9f6e-10e5048170ef |
| volumeId | 78b4d23b-3b57-4a38-9f6e-10e5048170ef |
+----------+--------------------------------------+
# nova volume-detach vm01 78b4d23b-3b57-4a38-9f6e-10e5048170ef

Creating bootable volumes
■ You can create a bootable block volume by creating a new volume from a template image.
  - Using the bootable volume, you can boot an instance directly from the block volume.
  - The following is an example of creating a bootable volume from an existing template image and launching an instance with it. (The "--image" option is ignored in the boot subcommand, but you need to specify one as a dummy entry.)

# cinder create --image-id 702d0c4e-b06c-4c15-85e5-9bb612eb6414 --display-name Fedora19-bootvol 5
  (--image-id specifies the template image ID.)
# cinder list
+--------------------------------------+-----------+------------------+------+-------------+----------+-------------+
|                  ID                  |   Status  |   Display Name   | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------------------+------+-------------+----------+-------------+
| 78b4d23b-3b57-4a38-9f6e-10e5048170ef | available |     volume01     |  5   |     None    |  false   |             |
| bdde9405-8be7-48d5-a879-35e37c97512f | available | Fedora19-bootvol |  5   |     None    |   true   |             |
+--------------------------------------+-----------+------------------+------+-------------+----------+-------------+
# nova boot --flavor m1.small --image Fedora19 --key-name mykey \
    --security-groups default --nic net-id=843a1586-6082-4e9f-950f-d44daa83358c \
    --block_device_mapping vda=bdde9405-8be7-48d5-a879-35e37c97512f:::0 vm02
  (vda=<block volume ID>:::<flag>; the last field is the flag to delete the volume after destroying the instance. 1=yes.)
# nova volume-list
+----------||-----------+-----------+------------------+------+-------------+--------------------------------------+
| ID       ||           | Status    | Display Name     | Size | Volume Type | Attached to                          |
+----------||-----------+-----------+------------------+------+-------------+--------------------------------------+
| 78b4d23b-||e5048170ef | available | volume01         | 5    | None        |                                      |
| bdde9405-||e37c97512f | in-use    | Fedora19-bootvol | 5    | None        | b4cb7edd-317f-44e9-97db-5a04c41a4510 |
+----------||-----------+-----------+------------------+------+-------------+--------------------------------------+
Internal services of Nova and Cinder

Internal services of Nova

[Diagram: controller node and compute node; communication between the services goes through the messaging server.]
- Nova API (controller node): provides the REST API.
- Nova Scheduler (controller node): chooses the compute node on which to launch the VM, and orders it to launch the VM.
- Nova Conductor (controller node): proxy service for database access; retrieves and updates resource information in the database.
- Nova Compute (compute node): launches VM instances through the Compute Driver, the driver for the specific hypervisor to be used (Libvirt here).
- The template image is downloaded from Glance as a qcow2 base image into /var/lib/nova/instances/_base, where the downloaded image is cached for a defined period. Each VM instance uses a qcow2 overlay image in /var/lib/nova/instances/<ID>, overlaid on the base image.
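The scheduling step can be illustrated with a toy filter-and-weigh function. This is not Nova's actual scheduler; the node names and resource fields are assumptions for the sketch.

```python
# Toy version of the scheduling step: filter out compute nodes that
# cannot hold the requested flavor, then weigh the remaining nodes
# by free RAM and pick the best one. Illustrative only.

def schedule(nodes, flavor):
    """nodes: {name: {"free_ram_mb": int, "free_disk_gb": int}}
    flavor: {"ram_mb": int, "disk_gb": int}"""
    # Filter: keep only nodes with enough free RAM and disk.
    candidates = [
        (name, info) for name, info in nodes.items()
        if info["free_ram_mb"] >= flavor["ram_mb"]
        and info["free_disk_gb"] >= flavor["disk_gb"]
    ]
    if not candidates:
        raise RuntimeError("No valid host was found")
    # Weigh: prefer the node with the most free RAM.
    return max(candidates, key=lambda c: c[1]["free_ram_mb"])[0]
```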
How messaging server works
■ The internal services and agents of one component (such as Nova) communicate through the messaging server.
  - The messaging server provides "topics" as channels of communication. A sender puts a message into a specific topic, and receivers pick up messages from the topics they have subscribed to.
  - Messages in topics carry a flag to specify the delivery model, such as "all subscribers should receive it" or "only one subscriber should receive it."
  - Since multiple senders can put messages into the same topic, this realizes M:N asynchronous communication.

[Diagram: services send messages to topics A and B on the messaging server; the services which have subscribed to topic A receive its messages.]
36

Copyright (C) 2014 National Institute of Informatics, All rights reserved.
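The delivery semantics described above can be sketched with a minimal in-memory model. This is only an illustration of the topic/flag behavior, not the actual AMQP messaging server used by OpenStack, and all class and variable names here are made up:

```python
import itertools

class Topic:
    """Minimal model of a messaging-server topic with two delivery flags:
    fanout ("all subscribers should receive") and round-robin
    ("only one subscriber should receive")."""
    def __init__(self):
        self.subscribers = []
        self._rr = None  # round-robin iterator for one-subscriber delivery

    def subscribe(self, callback):
        self.subscribers.append(callback)
        self._rr = itertools.cycle(self.subscribers)

    def publish(self, message, fanout=True):
        if fanout:
            for cb in self.subscribers:   # every subscriber receives the message
                cb(message)
        elif self.subscribers:
            next(self._rr)(message)       # exactly one subscriber receives it

# Two services subscribe to the same topic (M:N communication).
received = {"svc1": [], "svc2": []}
topic = Topic()
topic.subscribe(received["svc1"].append)
topic.subscribe(received["svc2"].append)

topic.publish("broadcast")             # delivered to both services
topic.publish("task-1", fanout=False)  # delivered to exactly one service
topic.publish("task-2", fanout=False)  # delivered to the other service
```

Running the sketch leaves "broadcast" in both services' inboxes, while "task-1" and "task-2" are split between them, mirroring the two delivery models above.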
Features of qcow2 disk image
■

■

qcow2 is a disk image format designed for virtual machines, with the
following features.
Dynamic block allocation
- The real (physical) file size is smaller than its logical image size. The file grows as data is
added. It's possible to extend the logical size, too.

■

Overlay mechanism
- You can add an overlay file on top of the backing image. The overlay file contains only the
additional changes from the backing image.
- The backing image can be shared by multiple overlay files. This is useful to reduce the
physical disk usage when many virtual machines are launched from the same template
image.

■

Multiple snapshots
- By taking snapshots of the image, you can reproduce the previous contents of the image, or
create a new image from the snapshot.

37
Copyright (C) 2014 National Institute of Informatics, All rights reserved.
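The overlay mechanism can be modeled conceptually as copy-on-write: reads fall through to the backing image unless the overlay holds its own copy of a block. The sketch below is only a model of this behavior (the real qcow2 format works at cluster granularity with reference counting):

```python
class Image:
    """Minimal copy-on-write model of a qcow2 overlay: writes go only to
    this layer; reads fall back to the backing image for untouched blocks."""
    def __init__(self, backing=None):
        self.blocks = {}        # block number -> data written to this layer
        self.backing = backing  # shared, read-only template image

    def write(self, n, data):
        self.blocks[n] = data   # the backing image is never modified

    def read(self, n):
        if n in self.blocks:
            return self.blocks[n]
        return self.backing.read(n) if self.backing else b"\x00"

base = Image()
base.write(0, b"template-os")
# Two VMs share one backing image through independent overlays.
vm1 = Image(backing=base)
vm2 = Image(backing=base)
vm1.write(1, b"vm1-data")

assert vm1.read(0) == b"template-os"  # falls through to the backing image
assert vm1.read(1) == b"vm1-data"     # served from the overlay itself
assert vm2.read(1) == b"\x00"         # vm2's overlay is unaffected by vm1
```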
Operations on qcow2 disk image
■

qemu-img is a command-line tool for manipulating qcow2 images.
Creating an image with a 5GB logical size.

# qemu-img create -f qcow2 baseimage.qcow2 5G
Formatting 'baseimage.qcow2', fmt=qcow2 size=5368709120 encryption=off
cluster_size=65536 lazy_refcounts=off
Creating an overlay file with
baseimage.qcow2 as a backing image.
# qemu-img create -f qcow2 -b baseimage.qcow2 layerimage.qcow2
Formatting 'layerimage.qcow2', fmt=qcow2 size=5368709120
backing_file='baseimage.qcow2' encryption=off cluster_size=65536 lazy_refcounts=off
# qemu-img info layerimage.qcow2
image: layerimage.qcow2
file format: qcow2
virtual size: 5.0G (5368709120 bytes)
disk size: 196K
cluster_size: 65536
backing file: baseimage.qcow2
Creating a snapshot.
# qemu-img snapshot -c snap01 layerimage.qcow2
# qemu-img snapshot -l layerimage.qcow2
Snapshot list:
ID
TAG
VM SIZE
DATE
VM CLOCK
1
snap01
0 2013-11-22 17:08:02
00:00:00.000

Creating a new image
from a snapshot.

# qemu-img convert -f qcow2 -O qcow2 -s snap01 layerimage.qcow2 copiedimage.qcow2
Reference:
 https://access.redhat.com/site/documentation/ja-JP/Red_Hat_Enterprise_Linux/6/html-single/  
 Virtualization_Administration_Guide/index.html#sect-Virtualization-Tips_and_tricks-Using_qemu_img

38

Copyright (C) 2014 National Institute of Informatics, All rights reserved.
Public key injection mechanism
■

■

Nova Compute injects the public key into "/root/.ssh/authorized_keys" of the local
disk image before launching the instance.
Cloud-Init can also be used to set up public key authentication at boot time, as
it can retrieve the public key through the meta-data service(*).
- Because allowing root login is undesirable in many cases, it is better to configure
Cloud-Init to create a general user and set up public key authentication for that user.
Retrieving the public key from meta-data.
$ curl http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA5W2IynhVezp+DpN11xdsY/8NOqeF8r7eYqVteeWZSBfnYhKn
8D85JmByBQnJ7HrJIrdMvfTYwWxi+swfFlryG3A+oSll0tT71FLAWnAYz26ML3HccyJ7E2bD66BSditbDITK
H3V66oN9c3rIEXZYQ3A+GEiA1cFD++R0FNKxyBOkjduycvksB5Nl9xb3k6z4uoZ7JQD5J14qnooM55Blmn2C
C2/2KlapxMi0tgSdkdfnSSxbYvlBztGiF3M4ey7kyuWwhE2iPBwkV/OhANl3nwHidcNdBrAGC3u78aTtUEwZ
tNUqrevVKM/yUfRRyPRNivuGOkvjTDUL/9BGquBX9Q== enakai@kakinoha

(*) In particular, when booting from a block volume, Nova Compute cannot inject the public key. Use of Cloud-Init is
mandatory in this case.
39
Copyright (C) 2014 National Institute of Informatics, All rights reserved.
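For example, a cloud-config user-data file along the following lines makes Cloud-Init create a general user with public key authentication instead of relying on root login (the user name and key below are placeholders):

```yaml
#cloud-config
users:
  - name: cloud-user                  # general user created instead of root
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-rsa AAAA...your-key... user@example
disable_root: true                    # disallow direct root login
```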
Block volume use cases and corresponding APIs
OS

It can be re-attached to
another instance.

OS

User Data

User Data

(2) Attach to a running instance
to store user data.

(4) Create a new block volume
from the snapshot.

(1) Create a new block volume.
(3) Create a snapshot

Template
image

OS

Create a block volume
from a template image.

■

Cinder API
- volume create/delete/list/show
(create from snapshot, image)

OS

- snapshot create/delete/list/show
■

Nova API
- volume attach/detach

40

Copyright (C) 2014 National Institute of Informatics, All rights reserved.
How Nova and Cinder works together
■

■

In a typical configuration, block volumes are created as LUNs in iSCSI storage boxes.
Cinder operates the management interface of the storage through the
corresponding driver.
Nova Compute attaches the LUN to the host Linux using the software initiator, and it is then
attached to the VM instance through the KVM hypervisor.
VM instance
/dev/vdb

Cinder
Nova Compute

Virtual disk

Create LUNs
Storage box

Linux KVM
/dev/sdX

iSCSI LUN

iSCSI SW
Initiator

iSCSI Target

41
Copyright (C) 2014 National Institute of Informatics, All rights reserved.
Internal services of Cinder
■

Volume drivers handle the management interface of the corresponding storage.
- When using multiple types of storage, Cinder Scheduler chooses the driver to be
used based on the requested storage type.
Provide REST API

Controller node
Cinder API

Cinder-Volume

Driver for a specific
type of storage
Create LUNs

Storage box

Volume Driver
Cinder Scheduler

Choose an appropriate
volume driver

Volume information

Database

LUN
iSCSI connection
Nova Compute
Nova API

Communication via the messaging server

Provide REST API

42

Copyright (C) 2014 National Institute of Informatics, All rights reserved.
Using LVM driver
■

Cinder provides the LVM driver as a reference implementation which uses Linux
LVM instead of external storage boxes.
- The snapshot feature is implemented with LVM snapshots, where the delta volume has
the same size as the base volume.
Cinder
VM instance
/dev/vdb

Virtual disk

Linux KVM
/dev/sdX

Create logical volumes and
export as iSCSI LUNs.

Nova Compute

iSCSI LUN

VG: cinder-volumes

LV

iSCSI SW
Target (tgtd)

iSCSI SW
Initiator
43
Copyright (C) 2014 National Institute of Informatics, All rights reserved.
Using NFS driver
■

Cinder also provides the NFS driver, which uses an NFS server as the storage backend.
- The driver simply mounts the NFS-exported directory and creates disk image files
in it. Compute nodes access the image files over the same NFS mount.
Cinder
VM instance
/dev/vdb

Nova Compute

NFS mount

Virtual disk

NFS server
Linux KVM
・・・
・・・

NFS mount
44
Copyright (C) 2014 National Institute of Informatics, All rights reserved.
Using GlusterFS driver
■

There is also a driver for the GlusterFS distributed filesystem.
- Currently it uses the FUSE mount mechanism. This will be replaced with a more optimized
mechanism (libgfapi) that bypasses the FUSE layer.

VM instance

Cinder

Nova Compute

FUSE mount
/dev/vdb

Virtual disk

GlusterFS cluster

Linux KVM

・・・
・・・

FUSE mount
45
Copyright (C) 2014 National Institute of Informatics, All rights reserved.
Architecture overview of Neutron

46
Copyright (C) 2014 National Institute of Informatics, All rights reserved.
Logical view of Neutron's virtual network
■

Each tenant has its own virtual router which works like "the broadband router in
your home network."
- Tenant users add virtual switches behind the router and assign private subnet addresses to
them. It's possible to use overlapping subnets with other tenants.

■

When launching an instance, the end-user selects the virtual switches to connect it to.
- The number of virtual NICs of the instance corresponds to the number of connected
switches. Private IPs are assigned via DHCP.

External network

Virtual router
for tenant A

Virtual switch
192.168.101.0/24

Virtual router
for tenant B

Virtual switch
192.168.102.0/24
47
Copyright (C) 2014 National Institute of Informatics, All rights reserved.
Plugin architecture of Neutron
■

The actual work of creating the virtual network is done by plugin agents.
- There are various plugins for Neutron, including commercial products from third-party vendors.
- OpenStack provides the "LinuxBridge plugin" and the "Open vSwitch plugin" as standard/reference
implementations.
Network controller

Controller node

Provide REST API

Create virtual routers

L2 Agent

Create virtual L2 switches

DHCP Agent

Neutron service

L3 Agent

Assign private IP addresses

Compute node
L2 Agent

Create virtual L2 switches

Communication via the messaging server
48
Copyright (C) 2014 National Institute of Informatics, All rights reserved.
Network configuration with standard plugin
■

The following shows the typical configuration using LinuxBridge plugin or Open
vSwitch plugin.
- The L3 Agent on the network node provides the virtual router function connecting the private
and public networks. ("eth0" of each node is used for accessing the host Linux, not for VM
instance communication.)
- It's not possible to have multiple network nodes; scalable network features are under
development at the time of writing.
Public network
Private network
eth0

eth1

eth2

eth0

L2 Agent
Provide DHCP function
for private networks

eth1
L2 Agent

DHCP Agent

Create virtual L2 switches

VM

eth1

eth0

VM

L2 Agent
VM

・・・

VM

L3 Agent
Network node

Provide virtual
router function

Compute node
49
Copyright (C) 2014 National Institute of Informatics, All rights reserved.
Internal architecture of LinuxBridge plugin

50
Copyright (C) 2014 National Institute of Informatics, All rights reserved.
Internal architecture of LinuxBridge plugin
■

This section describes how LinuxBridge plugin implements the virtual network in
the drawing below as a concrete example.

External network

Virtual router

Virtual L2 switch
private01

vm01

Virtual L2 switch
private02

vm02

vm03
51

Copyright (C) 2014 National Institute of Informatics, All rights reserved.
Configuration inside compute node
■

Linux bridges are created for each virtual switch. Outside the compute node, the
network traffic of each switch is separated with VLANs.
Configured by Nova Compute
vm01
IP

eth0

vm02
IP

eth0

vm03

IP

IP

eth1

eth0

IP is assigned from
dnsmasq on network node.
brqyyy

brqxxxx
private01
VLANs are created
for each virtual L2 switch.

Physical L2 switch
for private network

private02
eth1.102

eth1.101
eth1

Configured by L2 Agent

VLAN101
VLAN102
52
Copyright (C) 2014 National Institute of Informatics, All rights reserved.
Configuration inside network node
To/From public network
eth1
brqxxxx

Conceptually, there exists
a virtual router here.

IP

External GW IP
Internal GW IP

IP
Configured by DHCP Agent
dnsmasq is started
for each subnet.

■

■

Virtual router is implemented
with Linux's packet forwarding
feature.
dnsmasq is used as a DHCP server
for providing private IP addresses
for each subnet.
- IP addresses are assigned corresponding
to the MAC address of each virtual NIC.

qg-VVV

NAT and filtering is
done by iptables.
IP
qr-WWW

qr-YYY
dnsmasq

dnsmasq
ns-XXX

Configured by L3 Agent

IP

ns-ZZZ

IP

brqyyy

brqxxxx
private01

private02
eth1.102

eth1.101
eth2

Configured by L2 Agent

To/From private network

53

Copyright (C) 2014 National Institute of Informatics, All rights reserved.
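The MAC-to-IP assignment mentioned above is implemented with dnsmasq's DHCP host entries. Conceptually, the entries look like the following (the addresses are examples; "fa:16:3e" is the default MAC prefix used by OpenStack):

```
# dnsmasq DHCP host entries: one line per virtual NIC, mapping its MAC
# address to the private IP address chosen by Neutron for that port.
fa:16:3e:11:22:33,192.168.101.3
fa:16:3e:44:55:66,192.168.101.4
```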
Internal architecture of Open vSwitch plugin

54
Copyright (C) 2014 National Institute of Informatics, All rights reserved.
What is Open vSwitch?
■

Open vSwitch is software that creates virtual L2 switches on top of Linux. It supports
many features comparable to physical L2 switch products.
- In particular, since it supports the OpenFlow protocol, which provides fine-grained packet
control, Open vSwitch is widely used for virtual network applications.
Supported features of Open vSwitch(http://openvswitch.org/features/)
●

●
●
●
●
●
●
●
●
●
●
●
●
●
●
●

Visibility into inter-VM communication via NetFlow, sFlow(R), IPFIX, SPAN, RSPAN, and
GRE-tunneled mirrors
LACP (IEEE 802.1AX-2008)
Standard 802.1Q VLAN model with trunking
BFD and 802.1ag link monitoring
STP (IEEE 802.1D-1998)
Fine-grained QoS control
Support for HFSC qdisc
Per VM interface traffic policing
NIC bonding with source-MAC load balancing, active backup, and L4 hashing
OpenFlow protocol support (including many extensions for virtualization)
IPv6 support
Multiple tunneling protocols (GRE, VXLAN, IPsec, GRE and VXLAN over IPsec)
Remote configuration protocol with C and Python bindings
Kernel and user-space forwarding engine options
Multi-table forwarding pipeline with flow-caching engine
Forwarding layer abstraction to ease porting to new software and hardware platforms
55
Copyright (C) 2014 National Institute of Informatics, All rights reserved.
What is OpenFlow?
■

OpenFlow is a protocol to provide fine-grained control of packet forwarding from
an external controller.
- OpenFlow switches query the external controller about how received packets should be
handled.
- Since the programmability of the controller software gives flexibility over packet operations,
it is well suited to creating multi-tenant virtual networks. For example, the controller can decide the forwarding
port according to source/destination MAC addresses, modify the VLAN tag in the header, etc.
OpenFlow controller

Controller instructs how packets should
be handled through OpenFlow protocol.

OpenFlow switches

56
Copyright (C) 2014 National Institute of Informatics, All rights reserved.
Internal architecture of Open vSwitch plugin
■

This section describes how Open vSwitch plugin implements the virtual network in
the drawing below as a concrete example.

External network

Tenant A
Virtual router

Tenant B
Virtual router

Virtual L2 switch
projectA

vm01

Virtual L2 switch
project B

vm02

vm03

vm04
57

Copyright (C) 2014 National Institute of Informatics, All rights reserved.
Configuration inside compute node (1)
■

See the next page for explanation.
Configured by Nova Compute

IP

vm01

IP

eth0

qvoXXX

vm02

IP

eth0

qvoYYY

eth0

qvoZZZ

Port VLAN tag:1

"Internal VLAN" is assigned
to each virtual L2 switch.

vm03

IP

vm04
eth0

qvoWWW

Port VLAN tag:2

br-int
int-br-priv

phy-br-priv

Configured by L2 Agent

br-priv

Translation between
"Internal" and "External" VLAN
- Internal VLAN1<->External VLAN101
- Internal VLAN2<->External VLAN102

eth1

VLAN101
VLAN102

Open vSwitch
58
Copyright (C) 2014 National Institute of Informatics, All rights reserved.
Configuration inside compute node (2)
■

Virtual NICs of VM instances are connected to the common "Integration switch (br-int)".
- An internal VLAN is assigned to the connected port according to the (logical) virtual L2
switch to be connected.

■

Connection to the physical L2 switch for the private network is done through the
"Private switch (br-priv)".
- External VLANs are assigned on the physical switch according to the (logical) virtual L2
switch. The translation between Internal and External VLAN is done with OpenFlow.

■

In addition to VLAN, other separation mechanisms such as GRE tunneling can be
used over the physical network connection.
- In the case of GRE tunneling, the translation between "Internal VLAN" and "GRE tunnel ID"
is done with OpenFlow.

59
Copyright (C) 2014 National Institute of Informatics, All rights reserved.
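The internal/external VLAN translation is installed as OpenFlow flow entries on "br-priv" (outbound) and "br-int" (inbound). In ovs-ofctl syntax they look roughly like the following; the port numbers are examples, and the exact flows installed by the L2 Agent may differ:

```
# Outbound on br-priv: rewrite internal VLAN 1 to external VLAN 101.
in_port=2,dl_vlan=1    actions=mod_vlan_vid:101,NORMAL
# Inbound on br-int: rewrite external VLAN 101 back to internal VLAN 1.
in_port=1,dl_vlan=101  actions=mod_vlan_vid:1,NORMAL
```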
Configuration inside network node
■

To/From public network

Since two virtual routers
are configured, there are
two paths of packet
forwarding.

eth1

IP

IP

qg-VVV

IP
tapXXX

qg-CCC

IP

NAT and filtering is
done by iptables.

dnsmasq

Configured by DHCP Agent

Configured by L3 Agent

br-ex

IP
qr-YYY

dnsmasq
IP

qr-BBB

Port VLAN tag:1

tapAAA

Port VLAN tag:2

br-int
int-br-priv
Translation between
"Internal" and "External" VLAN
- Internal VLAN1<->External VLAN101
- Internal VLAN2<->External VLAN102

Configured by L2 Agent
phy-br-priv

br-priv
eth2

To/From private network

60

Copyright (C) 2014 National Institute of Informatics, All rights reserved.
Overlapping subnet with network namespace
■

■

When using multiple virtual routers, the network node needs to have independent
NAT/filtering configurations for each virtual router to allow the use of
overlapping subnets among multiple tenants. This is done with Linux's network
namespace feature, which allows Linux to have multiple independent network
configurations.
The following are the steps to use a network namespace.
- Create a new namespace.
- Allocate network ports inside the namespace. (Both physical and logical ports can be used.)
- Configure networks (port configuration, iptables configuration, etc.) inside the namespace.
- The configuration is then applied to network packets which go through the network ports
inside this namespace.

■

The L3 Agent of the LinuxBridge / Open vSwitch plugin uses network namespaces.
- It can be configured not to use namespaces, but the use of overlapping subnets must be
disabled in this case.
61
Copyright (C) 2014 National Institute of Informatics, All rights reserved.
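As an illustration of the steps above, a namespace can be created and configured with the ip command as follows (these commands require root privileges, and the names and addresses are examples only):

```
# Create a namespace and configure networking inside it.
ip netns add demo-ns
ip link set veth1 netns demo-ns        # move a (virtual) port into the namespace
ip netns exec demo-ns ip addr add 192.168.1.1/24 dev veth1
ip netns exec demo-ns ip link set veth1 up
ip netns exec demo-ns iptables -t nat -A POSTROUTING -j MASQUERADE
```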
The overall picture of Open vSwitch plugin (1)
■

See the next page for details.
External network

Network
namespace

Open vSwitch

Network node
eth1
br-ex
dnsmasq

Virtual router's GW IP on
external network side.

dnsmasq
NAT connection
by iptables

br-int
br-priv
VLAN ID mapping for
virtual L2 switches
is done with OpenFlow

eth2

VM1

Compute node
VM2

br-int
br-priv

Virtual router's
GW IP on private
network side.

eth1

VLAN Trunk

VLAN ID mapping for
virtual L2 switches
is done with OpenFlow

62
Copyright (C) 2014 National Institute of Informatics, All rights reserved.
The overall picture of Open vSwitch plugin (2)
■

While an end-user defines the virtual network components such as virtual L2
switches and virtual routers, the agents work in the following way.
- When a virtual L2 switch is defined, the L2 Agent configures the VLAN ID mapping on "br-int"
and "br-priv" so that compute nodes are connected to each other via VLAN. At the same time,
the DHCP Agent starts a new dnsmasq which provides the DHCP function for the corresponding
VLAN.
- When a virtual router is defined and connected to the external network, the L3 Agent creates a
port on "br-ex" which works as the external gateway of the virtual router.
- When a virtual L2 switch is connected to the virtual router, the L3 Agent creates a port on
"br-int" which works as the internal gateway of the virtual router. It also configures iptables to
provide the NAT connection between the public and private networks.

■

In addition to the agents explained so far, there is a
"Metadata Proxy Agent" which makes the metadata mechanism work.
- iptables on the network node is configured so that packets to "169.254.169.254:80" are
redirected to the Metadata Proxy Agent. This agent determines which instance sent the
packet from the source IP address, and sends back the corresponding response including the
requested metadata.
63
Copyright (C) 2014 National Institute of Informatics, All rights reserved.
Packet redirection to Metadata Proxy Agent
■

The following commands show iptables configuration within the namespace which
contains the virtual router. There is a redirection entry where packets to
"169.254.169.254:80" are redirected to Metadata Proxy Agent on the same node.

# ip netns list
qrouter-b35f6433-c3e7-489a-b505-c3be5606a643
qdhcp-1a4f4b41-3fbb-48a6-bb12-9621077a4f92
qrouter-86654720-d4ff-41eb-89db-aaabd4b13a35
qdhcp-f8422fc9-dbf8-4606-b798-af10bb389708

Namespace containing
the virtual router

# ip netns exec qrouter-b35f6433-c3e7-489a-b505-c3be5606a643 iptables -t nat -L
...
Chain quantum-l3-agent-PREROUTING (1 references)
target
prot opt source
destination
REDIRECT
tcp -- anywhere
169.254.169.254
tcp dpt:http redir ports 9697
...
# ps -ef | grep 9697
root
63055
1 0 7月09 ?
00:00:00 python /bin/quantum-ns-metadata-proxy
--pid_file=/var/lib/quantum/external/pids/b35f6433-c3e7-489a-b505-c3be5606a643.pid
--router_id=b35f6433-c3e7-489a-b505-c3be5606a643 --state_path=/var/lib/quantum
--metadata_port=9697 --verbose --log-file=quantum-ns-metadata-proxyb35f6433-c3e7-489a-b505c3be5606a643.log --log-dir=/var/log/quantum
■

Note that "NOZEROCONF=yes" should be set in "/etc/sysconfig/network" of the guest
OS when using the metadata mechanism.
- Without it, packets to "169.254.0.0/16" are not routed outside the guest OS due to
the APIPA specification.

64

Copyright (C) 2014 National Institute of Informatics, All rights reserved.
Configuration steps of virtual network

65
Copyright (C) 2014 National Institute of Informatics, All rights reserved.
Configuration steps of virtual network (1)
■

The following are the steps for configuring the virtual network with the quantum command.
- We use the following environment variables as parameters specific to each setup.
public="192.168.199.0/24"
gateway="192.168.199.1"
nameserver="192.168.199.1"
pool=("192.168.199.100" "192.168.199.199")

- Define an external network "ext-network".
tenant=$(keystone tenant-list | awk '/ services / {print $2}')
quantum net-create 
--tenant-id $tenant ext-network --shared 
--provider:network_type flat --provider:physical_network physnet1 
--router:external=True
●

●

●

●

Since the external network is shared by multiple tenants, the owner tenant (--tenant-id)
is "services" (a general tenant for shared services), and "--shared" option is added.
As we assume there are no VLANs in the external network, network_type is "flat".
In the plugin configuration file (plugin.ini), Open vSwitch for the external network
connection (br-ex) has an alias "physnet1" which is specified as physical_network here.
"--router:external=True" is specified to allow to be a default gateway of virtual routers.

66
Copyright (C) 2014 National Institute of Informatics, All rights reserved.
Configuration steps of virtual network (2)
- Define a subnet of the external network.
quantum subnet-create 
--tenant-id $tenant --gateway ${gateway} --disable-dhcp 
--allocation-pool start=${pool[0]},end=${pool[1]} 
ext-network ${public}
●

"--allocation-pool" specifies the IP address pool (the range of IP addresses which can be
used by OpenStack for router ports, floating IPs, etc.)

- Define a virtual router "demo_router" for the tenant "demo", and attach it to the external
network.
tenant=$(keystone tenant-list|awk '/ demo / {print $2}')
quantum router-create --tenant-id $tenant demo_router
quantum router-gateway-set demo_router ext-network
●

The owner tenant (--tenant-id) is "demo".

Alias setting for Open vSwitch in plugin configuration file (/etc/quantum/plugin.ini).
bridge_mappings=physnet1:br-ex,physnet2:br-priv
tenant_network_type=vlan
network_vlan_ranges=physnet1,physnet2:100:199

Mapping between alias and
actual Open vSwitch name
VLAN ID range for each Open vSwitch.
(VLAN is not used for physnet1.)

67
Copyright (C) 2014 National Institute of Informatics, All rights reserved.
Configuration steps of virtual network (3)
- Define a virtual L2 switch "private01".
quantum net-create 
--tenant-id $tenant private01 
--provider:network_type vlan 
--provider:physical_network physnet2 
--provider:segmentation_id 101
●

●

Since VLAN is used as the separation mechanism of private networks, "vlan" is specified for
network_type. The VLAN ID is specified with segmentation_id.
In the plugin configuration file (plugin.ini), Open vSwitch for the private network
connection (br-priv) has an alias "physnet2" which is specified as physical_network here.

- Define a subnet of "private01", and connect it to the virtual router.
quantum subnet-create 
--tenant-id $tenant --name private01-subnet 
--dns-nameserver ${nameserver} private01 192.168.1.101/24
quantum router-interface-add demo_router private01-subnet
●

"192.168.1.101/24" is specified for the subnet as an example here.

68
Copyright (C) 2014 National Institute of Informatics, All rights reserved.
69
Copyright (C) 2014 National Institute of Informatics, All rights reserved.

Spannerに関する技術メモ
Spannerに関する技術メモSpannerに関する技術メモ
Spannerに関する技術メモ
 
Googleのインフラ技術から考える理想のDevOps
Googleのインフラ技術から考える理想のDevOpsGoogleのインフラ技術から考える理想のDevOps
Googleのインフラ技術から考える理想のDevOps
 
エンジニア向け夏期特別講座 〜 Red Hat OpenStack徹底解説! 第一部 OpenStack入門
エンジニア向け夏期特別講座 〜 Red Hat OpenStack徹底解説! 第一部 OpenStack入門エンジニア向け夏期特別講座 〜 Red Hat OpenStack徹底解説! 第一部 OpenStack入門
エンジニア向け夏期特別講座 〜 Red Hat OpenStack徹底解説! 第一部 OpenStack入門
 
エンジニア向け夏期特別講座 〜 Red Hat OpenStack徹底解説! 第二部 OpenStackの内部構造
エンジニア向け夏期特別講座 〜 Red Hat OpenStack徹底解説! 第二部 OpenStackの内部構造エンジニア向け夏期特別講座 〜 Red Hat OpenStack徹底解説! 第二部 OpenStackの内部構造
エンジニア向け夏期特別講座 〜 Red Hat OpenStack徹底解説! 第二部 OpenStackの内部構造
 
RDOで体験! OpenStackの基本機能
RDOで体験! OpenStackの基本機能RDOで体験! OpenStackの基本機能
RDOで体験! OpenStackの基本機能
 
Linux女子部 iptables復習編
Linux女子部 iptables復習編Linux女子部 iptables復習編
Linux女子部 iptables復習編
 
Okinawa Open Days 2014 OpenStackハンズオンセミナー / OpenStackの機能概要
Okinawa Open Days 2014 OpenStackハンズオンセミナー / OpenStackの機能概要Okinawa Open Days 2014 OpenStackハンズオンセミナー / OpenStackの機能概要
Okinawa Open Days 2014 OpenStackハンズオンセミナー / OpenStackの機能概要
 
H26第1回 沖縄オープンラボラトリ・ハンズオンセミナー:OpenStack入門
H26第1回 沖縄オープンラボラトリ・ハンズオンセミナー:OpenStack入門H26第1回 沖縄オープンラボラトリ・ハンズオンセミナー:OpenStack入門
H26第1回 沖縄オープンラボラトリ・ハンズオンセミナー:OpenStack入門
 
OpenStackをさらに”使う”技術 - OpenStack&Docker活用テクニック
OpenStackをさらに”使う”技術 - OpenStack&Docker活用テクニックOpenStackをさらに”使う”技術 - OpenStack&Docker活用テクニック
OpenStackをさらに”使う”技術 - OpenStack&Docker活用テクニック
 
Your first TensorFlow programming with Jupyter
Your first TensorFlow programming with JupyterYour first TensorFlow programming with Jupyter
Your first TensorFlow programming with Jupyter
 
分散ストレージソフトウェアCeph・アーキテクチャー概要
分散ストレージソフトウェアCeph・アーキテクチャー概要分散ストレージソフトウェアCeph・アーキテクチャー概要
分散ストレージソフトウェアCeph・アーキテクチャー概要
 
Machine Learning Basics for Web Application Developers
Machine Learning Basics for Web Application DevelopersMachine Learning Basics for Web Application Developers
Machine Learning Basics for Web Application Developers
 

Ähnlich wie Inside Out: OpenStack's Internal Architecture

Survey of open source cloud architectures
Survey of open source cloud architecturesSurvey of open source cloud architectures
Survey of open source cloud architecturesabhinav vedanbhatla
 
Using VPP and SRIO-V with Clear Containers
Using VPP and SRIO-V with Clear ContainersUsing VPP and SRIO-V with Clear Containers
Using VPP and SRIO-V with Clear ContainersMichelle Holley
 
Nano Server - the future of Windows Server - Thomas Maurer
Nano Server - the future of Windows Server - Thomas MaurerNano Server - the future of Windows Server - Thomas Maurer
Nano Server - the future of Windows Server - Thomas MaurerITCamp
 
OpenStack Neutron Havana Overview - Oct 2013
OpenStack Neutron Havana Overview - Oct 2013OpenStack Neutron Havana Overview - Oct 2013
OpenStack Neutron Havana Overview - Oct 2013Edgar Magana
 
2012-03-15 What's New at Red Hat
2012-03-15 What's New at Red Hat2012-03-15 What's New at Red Hat
2012-03-15 What's New at Red HatShawn Wells
 
Openstack_administration
Openstack_administrationOpenstack_administration
Openstack_administrationAshish Sharma
 
Openstack Networking Internals - first part
Openstack Networking Internals - first partOpenstack Networking Internals - first part
Openstack Networking Internals - first partlilliput12
 
The Lies We Tell Our Code (#seascale 2015 04-22)
The Lies We Tell Our Code (#seascale 2015 04-22)The Lies We Tell Our Code (#seascale 2015 04-22)
The Lies We Tell Our Code (#seascale 2015 04-22)Casey Bisson
 
Cloud computing and OpenStack
Cloud computing and OpenStackCloud computing and OpenStack
Cloud computing and OpenStackEdgar Magana
 
The lies we tell our code, LinuxCon/CloudOpen 2015-08-18
The lies we tell our code, LinuxCon/CloudOpen 2015-08-18The lies we tell our code, LinuxCon/CloudOpen 2015-08-18
The lies we tell our code, LinuxCon/CloudOpen 2015-08-18Casey Bisson
 
2014/09/02 Cisco UCS HPC @ ANL
2014/09/02 Cisco UCS HPC @ ANL2014/09/02 Cisco UCS HPC @ ANL
2014/09/02 Cisco UCS HPC @ ANLdgoodell
 
Getting started with open stack
Getting started with open stackGetting started with open stack
Getting started with open stackDan Radez
 
26.1.7 lab snort and firewall rules
26.1.7 lab   snort and firewall rules26.1.7 lab   snort and firewall rules
26.1.7 lab snort and firewall rulesFreddy Buenaño
 
What's new in open stack juno (pnw os meetup)
What's new in open stack juno (pnw os meetup)What's new in open stack juno (pnw os meetup)
What's new in open stack juno (pnw os meetup)aedocw
 
tack Deployment in the Enterprise
tack Deployment in the Enterprisetack Deployment in the Enterprise
tack Deployment in the EnterpriseCisco Canada
 
OpenShift 4 installation
OpenShift 4 installationOpenShift 4 installation
OpenShift 4 installationRobert Bohne
 

Ähnlich wie Inside Out: OpenStack's Internal Architecture (20)

Survey of open source cloud architectures
Survey of open source cloud architecturesSurvey of open source cloud architectures
Survey of open source cloud architectures
 
ITE7_Chp9.pptx
ITE7_Chp9.pptxITE7_Chp9.pptx
ITE7_Chp9.pptx
 
Lecture 1.pptx
Lecture 1.pptxLecture 1.pptx
Lecture 1.pptx
 
Using VPP and SRIO-V with Clear Containers
Using VPP and SRIO-V with Clear ContainersUsing VPP and SRIO-V with Clear Containers
Using VPP and SRIO-V with Clear Containers
 
Nano Server - the future of Windows Server - Thomas Maurer
Nano Server - the future of Windows Server - Thomas MaurerNano Server - the future of Windows Server - Thomas Maurer
Nano Server - the future of Windows Server - Thomas Maurer
 
OpenStack Neutron Havana Overview - Oct 2013
OpenStack Neutron Havana Overview - Oct 2013OpenStack Neutron Havana Overview - Oct 2013
OpenStack Neutron Havana Overview - Oct 2013
 
2012-03-15 What's New at Red Hat
2012-03-15 What's New at Red Hat2012-03-15 What's New at Red Hat
2012-03-15 What's New at Red Hat
 
Openstack_administration
Openstack_administrationOpenstack_administration
Openstack_administration
 
Openstack Networking Internals - first part
Openstack Networking Internals - first partOpenstack Networking Internals - first part
Openstack Networking Internals - first part
 
The Lies We Tell Our Code (#seascale 2015 04-22)
The Lies We Tell Our Code (#seascale 2015 04-22)The Lies We Tell Our Code (#seascale 2015 04-22)
The Lies We Tell Our Code (#seascale 2015 04-22)
 
Cloud computing and OpenStack
Cloud computing and OpenStackCloud computing and OpenStack
Cloud computing and OpenStack
 
The lies we tell our code, LinuxCon/CloudOpen 2015-08-18
The lies we tell our code, LinuxCon/CloudOpen 2015-08-18The lies we tell our code, LinuxCon/CloudOpen 2015-08-18
The lies we tell our code, LinuxCon/CloudOpen 2015-08-18
 
Open stack wtf_(1)
Open stack  wtf_(1)Open stack  wtf_(1)
Open stack wtf_(1)
 
2014/09/02 Cisco UCS HPC @ ANL
2014/09/02 Cisco UCS HPC @ ANL2014/09/02 Cisco UCS HPC @ ANL
2014/09/02 Cisco UCS HPC @ ANL
 
Getting started with open stack
Getting started with open stackGetting started with open stack
Getting started with open stack
 
26.1.7 lab snort and firewall rules
26.1.7 lab   snort and firewall rules26.1.7 lab   snort and firewall rules
26.1.7 lab snort and firewall rules
 
What's new in open stack juno (pnw os meetup)
What's new in open stack juno (pnw os meetup)What's new in open stack juno (pnw os meetup)
What's new in open stack juno (pnw os meetup)
 
tack Deployment in the Enterprise
tack Deployment in the Enterprisetack Deployment in the Enterprise
tack Deployment in the Enterprise
 
OpenShift 4 installation
OpenShift 4 installationOpenShift 4 installation
OpenShift 4 installation
 
Fiware cloud developers week brussels
Fiware cloud developers week brusselsFiware cloud developers week brussels
Fiware cloud developers week brussels
 

Mehr von Etsuji Nakai

「ITエンジニアリングの本質」を考える
「ITエンジニアリングの本質」を考える「ITエンジニアリングの本質」を考える
「ITエンジニアリングの本質」を考えるEtsuji Nakai
 
Googleのインフラ技術に見る基盤標準化とDevOpsの真実
Googleのインフラ技術に見る基盤標準化とDevOpsの真実Googleのインフラ技術に見る基盤標準化とDevOpsの真実
Googleのインフラ技術に見る基盤標準化とDevOpsの真実Etsuji Nakai
 
Introducton to Convolutional Nerural Network with TensorFlow
Introducton to Convolutional Nerural Network with TensorFlowIntroducton to Convolutional Nerural Network with TensorFlow
Introducton to Convolutional Nerural Network with TensorFlowEtsuji Nakai
 
Googleにおける機械学習の活用とクラウドサービス
Googleにおける機械学習の活用とクラウドサービスGoogleにおける機械学習の活用とクラウドサービス
Googleにおける機械学習の活用とクラウドサービスEtsuji Nakai
 
A Brief History of My English Learning
A Brief History of My English LearningA Brief History of My English Learning
A Brief History of My English LearningEtsuji Nakai
 
TensorFlowプログラミングと分類アルゴリズムの基礎
TensorFlowプログラミングと分類アルゴリズムの基礎TensorFlowプログラミングと分類アルゴリズムの基礎
TensorFlowプログラミングと分類アルゴリズムの基礎Etsuji Nakai
 
TensorFlowによるニューラルネットワーク入門
TensorFlowによるニューラルネットワーク入門TensorFlowによるニューラルネットワーク入門
TensorFlowによるニューラルネットワーク入門Etsuji Nakai
 
Using Kubernetes on Google Container Engine
Using Kubernetes on Google Container EngineUsing Kubernetes on Google Container Engine
Using Kubernetes on Google Container EngineEtsuji Nakai
 
Lecture note on PRML 8.2
Lecture note on PRML 8.2Lecture note on PRML 8.2
Lecture note on PRML 8.2Etsuji Nakai
 
Deep Q-Network for beginners
Deep Q-Network for beginnersDeep Q-Network for beginners
Deep Q-Network for beginnersEtsuji Nakai
 
TensorFlowで学ぶDQN
TensorFlowで学ぶDQNTensorFlowで学ぶDQN
TensorFlowで学ぶDQNEtsuji Nakai
 
インタークラウドを実現する技術 〜 デファクトスタンダードからの視点 〜
インタークラウドを実現する技術 〜 デファクトスタンダードからの視点 〜インタークラウドを実現する技術 〜 デファクトスタンダードからの視点 〜
インタークラウドを実現する技術 〜 デファクトスタンダードからの視点 〜Etsuji Nakai
 
Exploring the Philosophy behind Docker/Kubernetes/OpenShift
Exploring the Philosophy behind Docker/Kubernetes/OpenShiftExploring the Philosophy behind Docker/Kubernetes/OpenShift
Exploring the Philosophy behind Docker/Kubernetes/OpenShiftEtsuji Nakai
 
「TensorFlow Tutorialの数学的背景」 クイックツアー(パート1)
「TensorFlow Tutorialの数学的背景」 クイックツアー(パート1)「TensorFlow Tutorialの数学的背景」 クイックツアー(パート1)
「TensorFlow Tutorialの数学的背景」 クイックツアー(パート1)Etsuji Nakai
 
Docker活用パターンの整理 ― どう組み合わせるのが正解?!
Docker活用パターンの整理 ― どう組み合わせるのが正解?!Docker活用パターンの整理 ― どう組み合わせるのが正解?!
Docker活用パターンの整理 ― どう組み合わせるのが正解?!Etsuji Nakai
 
Open Shift v3 主要機能と内部構造のご紹介
Open Shift v3 主要機能と内部構造のご紹介Open Shift v3 主要機能と内部構造のご紹介
Open Shift v3 主要機能と内部構造のご紹介Etsuji Nakai
 
Docker with RHEL7 技術勉強会
Docker with RHEL7 技術勉強会Docker with RHEL7 技術勉強会
Docker with RHEL7 技術勉強会Etsuji Nakai
 

Mehr von Etsuji Nakai (20)

PRML11.2-11.3
PRML11.2-11.3PRML11.2-11.3
PRML11.2-11.3
 
「ITエンジニアリングの本質」を考える
「ITエンジニアリングの本質」を考える「ITエンジニアリングの本質」を考える
「ITエンジニアリングの本質」を考える
 
Googleのインフラ技術に見る基盤標準化とDevOpsの真実
Googleのインフラ技術に見る基盤標準化とDevOpsの真実Googleのインフラ技術に見る基盤標準化とDevOpsの真実
Googleのインフラ技術に見る基盤標準化とDevOpsの真実
 
Introducton to Convolutional Nerural Network with TensorFlow
Introducton to Convolutional Nerural Network with TensorFlowIntroducton to Convolutional Nerural Network with TensorFlow
Introducton to Convolutional Nerural Network with TensorFlow
 
Googleにおける機械学習の活用とクラウドサービス
Googleにおける機械学習の活用とクラウドサービスGoogleにおける機械学習の活用とクラウドサービス
Googleにおける機械学習の活用とクラウドサービス
 
A Brief History of My English Learning
A Brief History of My English LearningA Brief History of My English Learning
A Brief History of My English Learning
 
TensorFlowプログラミングと分類アルゴリズムの基礎
TensorFlowプログラミングと分類アルゴリズムの基礎TensorFlowプログラミングと分類アルゴリズムの基礎
TensorFlowプログラミングと分類アルゴリズムの基礎
 
TensorFlowによるニューラルネットワーク入門
TensorFlowによるニューラルネットワーク入門TensorFlowによるニューラルネットワーク入門
TensorFlowによるニューラルネットワーク入門
 
Using Kubernetes on Google Container Engine
Using Kubernetes on Google Container EngineUsing Kubernetes on Google Container Engine
Using Kubernetes on Google Container Engine
 
Lecture note on PRML 8.2
Lecture note on PRML 8.2Lecture note on PRML 8.2
Lecture note on PRML 8.2
 
Deep Q-Network for beginners
Deep Q-Network for beginnersDeep Q-Network for beginners
Deep Q-Network for beginners
 
Life with jupyter
Life with jupyterLife with jupyter
Life with jupyter
 
TensorFlowで学ぶDQN
TensorFlowで学ぶDQNTensorFlowで学ぶDQN
TensorFlowで学ぶDQN
 
PRML7.2
PRML7.2PRML7.2
PRML7.2
 
インタークラウドを実現する技術 〜 デファクトスタンダードからの視点 〜
インタークラウドを実現する技術 〜 デファクトスタンダードからの視点 〜インタークラウドを実現する技術 〜 デファクトスタンダードからの視点 〜
インタークラウドを実現する技術 〜 デファクトスタンダードからの視点 〜
 
Exploring the Philosophy behind Docker/Kubernetes/OpenShift
Exploring the Philosophy behind Docker/Kubernetes/OpenShiftExploring the Philosophy behind Docker/Kubernetes/OpenShift
Exploring the Philosophy behind Docker/Kubernetes/OpenShift
 
「TensorFlow Tutorialの数学的背景」 クイックツアー(パート1)
「TensorFlow Tutorialの数学的背景」 クイックツアー(パート1)「TensorFlow Tutorialの数学的背景」 クイックツアー(パート1)
「TensorFlow Tutorialの数学的背景」 クイックツアー(パート1)
 
Docker活用パターンの整理 ― どう組み合わせるのが正解?!
Docker活用パターンの整理 ― どう組み合わせるのが正解?!Docker活用パターンの整理 ― どう組み合わせるのが正解?!
Docker活用パターンの整理 ― どう組み合わせるのが正解?!
 
Open Shift v3 主要機能と内部構造のご紹介
Open Shift v3 主要機能と内部構造のご紹介Open Shift v3 主要機能と内部構造のご紹介
Open Shift v3 主要機能と内部構造のご紹介
 
Docker with RHEL7 技術勉強会
Docker with RHEL7 技術勉強会Docker with RHEL7 技術勉強会
Docker with RHEL7 技術勉強会
 

Kürzlich hochgeladen

Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Mark Simos
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationSlibray Presentation
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsMark Billinghurst
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024Stephanie Beckett
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr BaganFwdays
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024Lonnie McRorey
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLScyllaDB
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brandgvaughan
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsSergiu Bodiu
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 3652toLead Limited
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Enterprise Knowledge
 
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo DayH2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo DaySri Ambati
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebUiPathCommunity
 
Artificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxArtificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxhariprasad279825
 
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostLeverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostZilliz
 
DSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningDSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningLars Bell
 
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxMerck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxLoriGlavin3
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubKalema Edgar
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupFlorian Wilhelm
 

Kürzlich hochgeladen (20)

Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck Presentation
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR Systems
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024
 
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptxE-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQL
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brand
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platforms
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024
 
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo DayH2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio Web
 
Artificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxArtificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptx
 
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostLeverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
 
DSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningDSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine Tuning
 
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxMerck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding Club
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project Setup
 

Inside Out: OpenStack's Internal Architecture

  • 1. OpenStack: Inside Out Etsuji Nakai Senior Solution Architect Red Hat ver1.0 2014/02/22 1 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
  • 2. $ who am i ■ Etsuji Nakai - Senior solution architect and Cloud evangelist at Red Hat. - The author of “Professional Linux Systems” series. ● Available in Japanese/Korean. Translation offers from publishers are welcome ;-) Self-study Linux Deploy and Manage by yourself Professional Linux Systems Technology for Next Decade Professional Linux Systems Deployment and Management Professional Linux Systems Network Management 2 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
  • 3. Contents ■ Overview of OpenStack ■ Major components of OpenStack ■ Internal architecture of Nova and Cinder ■ Architecture overview of Neutron ■ Internal architecture of LinuxBridge plugin ■ Internal architecture of Open vSwitch plugin ■ Configuration steps of virtual network Note: Use of RDO (Grizzly) is assumed in this document. 3 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
  • 4. Overview of OpenStack 4 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
  • 5. Computing resources in OpenStack cloud ■ The end-users of OpenStack can create and configure the following computing resources in their private tenants through web console and/or APIs. OpenStack User External Network - Virtual Network - VM Instances - Block volumes ■ Each user belongs to one or more projects. - Users in the same project share the common computing resources in their project environment. Project Environment Virtual Router Virtual Switches - Each project owns (virtual) computing resources which are independent of other projects. OS User Data VM Instances Block volumes 5 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
  • 6. Logical view of OpenStack virtual network ■ Each tenant has its own virtual router which works like "the broadband router in your home network." - Tenant users add virtual switches behind the router and assign private subnet addresses to them. It's possible to use overlapping subnets with other tenants. ■ When launching an instance, the end-user selects virtual switches to connect it. - The number of virtual NICs of the instance corresponds to the number of switches to connect. Private IPs are assigned via DHCP. External network Virtual router for tenant A Virtual switch 192.168.101.0/24 Virtual router for tenant B Virtual switch 192.168.102.0/24 6 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
  • 7. Private IP and Floating IP ■ When accessing from the external network, a "Floating IP" is attached to the VM instance. - A range of IP addresses of the external network which can be used as Floating IPs is pooled and distributed to each tenant in advance. - The Floating IP is NAT-ed to the corresponding Private IP on the virtual router. - Accessing the external network from a VM instance is possible without assigning a Floating IP. The IP masquerade feature of the virtual router is used in this case. Connecting from the external network with Floating IP Floating IP Private IP Web Server Connecting between VM instances with Private IP Private IP DB Server 7 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
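The Floating IP/Private IP translation described above can be sketched as a simple 1:1 NAT table held by the virtual router. This is an illustrative simulation, not OpenStack code; the class and method names (`VirtualRouter`, `associate`, `inbound`, `outbound`) are hypothetical.

```python
# Sketch of the 1:1 NAT a tenant's virtual router performs.
# Not Neutron code -- all names here are illustrative assumptions.

class VirtualRouter:
    def __init__(self):
        self.nat_table = {}  # floating_ip -> private_ip

    def associate(self, floating_ip, private_ip):
        # Attach a Floating IP (from the tenant's pool) to an instance.
        self.nat_table[floating_ip] = private_ip

    def inbound(self, dst_ip):
        # DNAT: packets from the external network have their destination
        # rewritten from the Floating IP to the Private IP.
        return self.nat_table.get(dst_ip)

    def outbound(self, src_ip):
        # SNAT: packets leaving the tenant network get the Floating IP as
        # their source; instances without one are masqueraded instead.
        for floating, private in self.nat_table.items():
            if private == src_ip:
                return floating
        return None  # no Floating IP: falls back to IP masquerade

router = VirtualRouter()
router.associate("203.0.113.10", "192.168.101.5")
print(router.inbound("203.0.113.10"))    # -> 192.168.101.5
print(router.outbound("192.168.101.5"))  # -> 203.0.113.10
```

Instance-to-instance traffic on the tenant network never touches this table; only traffic crossing the virtual router to or from the external network is translated.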
  • 8. VM instance creation ■ When launching a new VM instance, the following options should be specified. - Instance type (flavor) External network - Template image - Virtual network - Security group - Key pair Format Description raw Flat image file AMI/AKI/ARI Used with Amazon EC2 qcow2 Used with Linux KVM VDI Used with VirtualBox VMDK Used with VMware VHD Used with Hyper-V Security Group Supported import image format Template image Download OS It's possible to connect multiple networks. 8 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
  • 9. Key pair authentication for SSH connection ■ A user registers his/her public key in advance. It's injected to guest OS when launching a new instance. - Key pairs are registered for each user. They are not shared with multiple users. VM instance (3) Authenticate with secret key. Secret key Public key (2) Public key is injected to guest OS. (1) Register public key in advance. User information database 9 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
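The three-step key-pair flow on this slide (register, inject, authenticate) can be sketched as follows. This is an illustrative simulation, not Nova code: the functions and the `matching_public` helper are hypothetical, and real SSH uses a signature challenge rather than a direct key comparison.

```python
# Sketch of the key-pair flow: (1) register a public key, (2) inject it
# into the guest OS at launch, (3) authenticate with the secret key.
# All names are illustrative assumptions, not OpenStack APIs.

registered_keys = {}  # user -> public key, kept in the user database


def register_keypair(user, public_key):
    # Step (1): the user registers a public key in advance.
    registered_keys[user] = public_key


def launch_instance(user):
    # Step (2): the registered public key is injected into the guest OS.
    return {"authorized_keys": [registered_keys[user]]}


def matching_public(secret_key):
    # Stand-in for deriving the public half of a key pair.
    return secret_key + ".pub"


def ssh_login(instance, secret_key):
    # Step (3): login succeeds only if the secret key matches an
    # injected public key (real SSH verifies a signature instead).
    return matching_public(secret_key) in instance["authorized_keys"]


register_keypair("alice", "alice-key.pub")
vm = launch_instance("alice")
print(ssh_login(vm, "alice-key"))  # -> True
print(ssh_login(vm, "bob-key"))    # -> False
```

Because key pairs are per-user and not shared, an instance launched by one user does not accept another user's secret key unless that user's public key was explicitly added.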
  • 10. Instance types and corresponding disk areas ■ The following is the list of instance types created by default. - The root disk is extended to the specified size after being copied from the template image (except m1.tiny). Instance type (flavor) vCPU Memory root disk temp disk swap disk m1.tiny 1 512MB 0GB 0 0 m1.small 1 2GB 20GB 0 0 m1.medium 2 4GB 40GB 0 0 m1.large 4 4GB 80GB 0 0 m1.xlarge 8 8GB 160GB 0 0 ■ The admin users can define new instance types. - The following is an example of using temp disk and swap disk. NAME vda └─vda1 vdb vdc MAJ:MIN RM SIZE RO TYPE MOUNTPOINT 252:0 0 20G 0 disk 252:1 0 20G 0 part / 252:16 0 5G 0 disk /mnt 252:32 0 1G 0 disk [SWAP] root disk temp disk swap disk - Since these disks are discarded when the instance is destroyed, persistent data should be stored in different places, typically in block volumes. 10 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
  • 11. Snapshot of VM instances ■ By taking snapshot of running instances, you can copy the root disk and reuse it as a template image. Template image Instance snapshot Launch an instance from a snapshot. Launch an instance from a template image. OS OS Create a snapshot which is a copy of the root disk. 11 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
  • 12. Block volume as persistent data store ■ Block volumes remain undeleted after destroying a VM instance. It can be used as a persistent data store. OS OS User Data It can be re-attached to another instance. User Data (2) Attach to a running instance to store user data. (4) Create a new block volume from the snapshot. (1) Create a new block volume. (3) Create a snapshot 12 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
  • 13. Boot from block volume ■ It's possible to copy a template image to a new block volume to create a bootable block volume. - When booting from block volume, the contents of guest OS remain undeleted even when the instance is destroyed. - You can create a snapshot of the bootable volume, and create a new bootable volume from it when launching a new instance. OS OS Boot an instance directly from block volume. Template image OS Create a block volume from a template image. Copy OS Create a snapshot. 13 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
  • 14. Major components of OpenStack 14 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
  • 15. Major components of OpenStack ■ OpenStack is a set of component modules for various services and functions. - Swift : Object store ● Amazon S3-like object storage - Nova : Virtual machine life cycle management - Glance : Virtual machine image catalog ● Actual images are stored in the backend storage, typically in Swift. - Cinder : Virtual disk volume ● Amazon EBS-like volume management - Keystone : Centralized authentication and service catalogue system - Neutron : Virtual network management API (formerly known as Quantum) ● Actual network provisioning is delegated to external plugin modules. - Horizon : Web based self-service portal 15 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
  • 16. Modules work together through REST API ■ Modules work together through REST API calls and the message queue. - Operations can be automated with external programs through REST API. Client PC Public network Create virtual machines Retrieve template images Network Node Create virtual network Swift VM template images Glance Horizon Nova Scheduler Keystone QPID / MySQL Authentication service Message queue and backend RDB Neutron Nova Nova Nova Compute Compute Compute Start virtual machines Attach virtual disk volumes (iSCSI) Management network Cinder Disk Images 16 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
  • 17. API request call ■ There are two cases when API requests are issued. - When the end-user sends a request call directly or via Horizon dashboard. - When some components send a request call to another component. Database MySQL Infrastructure data Keystone (User authentication) Messaging QPID Horizon (Dashboard) API call Web access Message delivery to agents Neutron (Virtual network) Cinder (Block volumes) Nova (VM instances) Glance (VM templates) Connecting to virtual switches Attaching block volumes Downloading template images 17 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
• 18. User authentication for API requests ■ You need to be authenticated before sending requests to APIs. - End-users/components obtain the "token" for the API operation from Keystone before sending requests to APIs. (Each component has its own user ID representing it in Keystone.) - When obtaining the token, the URL for the target API is also retrieved from Keystone. End-users need to know only the URL of the Keystone API in advance. Keystone (User authentication) Neutron (Virtual network) Cinder (Block volumes) Horizon (Dashboard) Nova (VM instances) Glance (VM templates) 18 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
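The token exchange can be sketched concretely. The following is a minimal illustration of the Keystone v2.0 token API shape: the request body carries the user's credentials (reusing the demo_user values from the examples in this deck), and the response returns a token ID together with the service catalog from which clients look up the target API's URL. The sample response is abbreviated and its IDs are placeholders, not real output.

```python
import json

# Request body for Keystone v2.0 "POST /v2.0/tokens". Credentials are the
# demo_user values used elsewhere in this deck.
auth_request = {
    "auth": {
        "tenantName": "demo",
        "passwordCredentials": {
            "username": "demo_user",
            "password": "passw0rd",
        },
    }
}
wire_body = json.dumps(auth_request)  # what actually goes over HTTP

# Abbreviated, made-up sample of the response shape: a token plus the
# service catalog.
sample_response = {
    "access": {
        "token": {"id": "aaaa-bbbb", "expires": "2014-02-23T00:00:00Z"},
        "serviceCatalog": [
            {"type": "compute",
             "endpoints": [{"publicURL": "http://172.16.1.11:8774/v2/555b"}]},
            {"type": "network",
             "endpoints": [{"publicURL": "http://172.16.1.11:9696/"}]},
        ],
    }
}

def endpoint_for(response, service_type):
    """Look up a service's public URL in the catalog that came back with
    the token, so clients never hard-code per-service URLs."""
    for svc in response["access"]["serviceCatalog"]:
        if svc["type"] == service_type:
            return svc["endpoints"][0]["publicURL"]
    raise KeyError(service_type)

token_id = sample_response["access"]["token"]["id"]
nova_url = endpoint_for(sample_response, "compute")
```

Subsequent API calls then carry the token ID (as the X-Auth-Token header) to the URL found in the catalog.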
• 19. Token mechanism of Keystone authentication ■ Since OpenStack clients make many API calls to various components, authenticating with ID/password for every call is undesirable in terms of security and performance. ■ Instead, the clients obtain the "token" as a "license" for API calls in advance, and send the token ID to the component they want to use. - The component receiving the request validates the token ID with Keystone before accepting the request. - The generated token is stored in Keystone for a defined period (default: 24 hours). Clients can reuse it until it expires, so they don't need to obtain a new token for each request call. Keystone server Obtain the token (Authenticated with ID/password) The generated token is stored in Keystone. ID=yyyy Send back the token ID Client Send a request with the token ID Validate the token ID and check the client's role. 19 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
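The reuse-until-expiry behavior can be illustrated with a small sketch (illustrative only, not the actual keystoneclient code): the client authenticates once, caches the token ID, and only re-authenticates after the 24-hour lifetime has passed.

```python
import datetime as dt

class TokenCache:
    """Reuse a token until it expires, mimicking how clients avoid
    re-authenticating with ID/password on every API call."""

    def __init__(self, authenticate, lifetime=dt.timedelta(hours=24)):
        self._authenticate = authenticate  # callable returning a new token ID
        self._lifetime = lifetime          # Keystone default is 24 hours
        self._token = None
        self._expires = None

    def get(self, now=None):
        now = now or dt.datetime.utcnow()
        if self._token is None or now >= self._expires:
            self._token = self._authenticate()   # full ID/password auth
            self._expires = now + self._lifetime
        return self._token                       # cached: no auth round-trip

calls = []
def fake_auth():
    calls.append(1)
    return "token-%d" % len(calls)

cache = TokenCache(fake_auth)
t0 = dt.datetime(2014, 2, 22, 9, 0)
a = cache.get(now=t0)
b = cache.get(now=t0 + dt.timedelta(hours=1))   # still valid: reused
c = cache.get(now=t0 + dt.timedelta(hours=25))  # expired: re-authenticate
```

Only two authentications happen in this run; the one-hour-later call is served from the cache.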
• 20. Command operations of Keystone (1) ■ When using the standard command line tools of OpenStack, you specify the user name, password, tenant and API URL with environment variables. - Keystone API has different URLs (port numbers) for admin users and general users. - You can also specify them with command line options. - The following is an example of keystone operation using the default admin user "admin". This file is generated by packstack under /root. # cat keystonerc_admin export OS_USERNAME=admin export OS_TENANT_NAME=admin export OS_PASSWORD=714f1ab569a64a3b export OS_AUTH_URL=http://172.16.1.11:35357/v2.0/ export PS1='[\u@\h \W(keystone_admin)]$ ' Port 35357 is used for admin users. # . keystonerc_admin # keystone user-list +----------------------------------+------------+---------+-------------------+ | id | name | enabled | email | +----------------------------------+------------+---------+-------------------+ | 589a800d70534655bfade5504958afd6 | admin | True | test@test.com | | 3c45a1f5a88d4c1d8fb07b51ed72cd55 | cinder | True | cinder@localhost | | f23d88041e5245ee8cc8b0a5c3ec3f6c | demo_admin | True | | | 44be5165fdf64bd5907d07aa1aaa5dab | demo_user | True | | | cd75770810634ed3a09d92b61aacf0a7 | glance | True | glance@localhost | | a38561ed906e48468cf1759918735c53 | nova | True | nova@localhost | | 157c8846521846e0abdd16895dc8f024 | quantum | True | quantum@localhost | +----------------------------------+------------+---------+-------------------+ 20 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
  • 21. Command operations of Keystone (2) ■ The following is an example of showing registered API services and their URLs. - Command line tools for other components internally use this mechanism to retrieve the API of the target component. # keystone service-list +----------------------------------+----------+----------+----------------------------+ | id | name | type | description | +----------------------------------+----------+----------+----------------------------+ | 5ea55cbee90546d1abace7f71808ad73 | cinder | volume | Cinder Service | | e92e73a765be4beca9f12f5f5d9943e0 | glance | image | Openstack Image Service | | 3631d835081344eb873f1d0d5057314d | keystone | identity | OpenStack Identity Service | | 8db624ad713e440492aeccac6ab70a90 | nova | compute | Openstack Compute Service | | e9f02d3803ab44f1a369602010864a34 | nova_ec2 | ec2 | EC2 Service | | 5889a1e691584e539aa121ab31194cca | quantum | network | Quantum Networking Service | +----------------------------------+----------+----------+----------------------------+ # keystone endpoint-list +----------------------------------+-----------+------------------------------------------||-+----------------------------------+ | id | region | publicurl || | service_id | +----------------------------------+-----------+------------------------------------------||-+----------------------------------+ | 0e96a30d9ce742ecb0bf123eebf84ac0 | RegionOne | http://172.16.1.11:8774/v2/%(tenant_id)s || | 8db624ad713e440492aeccac6ab70a90 | | 928a38f18cc54040a0aa53bd3da99390 | RegionOne | http://172.16.1.11:9696/ || | 5889a1e691584e539aa121ab31194cca | | d46cebe4806b43c4b48499285713ac7a | RegionOne | http://172.16.1.11:9292 || | e92e73a765be4beca9f12f5f5d9943e0 | | ebdd4e61571945b7801554908caf5bae | RegionOne | http://172.16.1.11:8776/v1/%(tenant_id)s || | 5ea55cbee90546d1abace7f71808ad73 | | ebec661dd65b4d4bb12fe67c25b2c77a | RegionOne | http://172.16.1.11:5000/v2.0 || | 3631d835081344eb873f1d0d5057314d | | 
f569475b6d364a04837af6d6a577befe | RegionOne | http://172.16.1.11:8773/services/Cloud || | e9f02d3803ab44f1a369602010864a34 | +----------------------------------+-----------+------------------------------------------||-+----------------------------------+ ■ Each command line tool provides the "help" sub command to show the list of sub commands and their details. # keystone help <- Showing the list of all sub commands # keystone help user-list <- Showing the detail of the "user-list" sub command 21 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
  • 22. Template image registration with Glance (1) ■ You can register new template images with Glance. The registered images become available from Nova. Keystone (User authentication) Neutron (Virtual network) Cinder (Block volumes) Horizon (Dashboard) Nova (VM instances) Glance (VM templates) 22 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
• 23. Template image registration with Glance (2) ■ The following is an example of registering a new template image with the general user "demo_user". The image is downloaded from the specified URL. This file needs to be created manually. Port 5000 is used for general users. # cat keystonerc_demo_user export OS_USERNAME=demo_user export OS_TENANT_NAME=demo export OS_PASSWORD=passw0rd export OS_AUTH_URL=http://172.16.1.11:5000/v2.0/ export PS1='[\u@\h \W(keystone_demouser)]$ ' # . keystonerc_demo_user # glance image-create --name "Fedora19" --disk-format qcow2 --container-format bare --is-public true --copy-from http://cloud.fedoraproject.org/fedora-19.x86_64.qcow2 # glance image-list +--------------------------------------+----------+-------------+------------------+-----------+--------+ | ID | Name | Disk Format | Container Format | Size | Status | +--------------------------------------+----------+-------------+------------------+-----------+--------+ | 702d0c4e-b06c-4c15-85e5-9bb612eb6414 | Fedora19 | qcow2 | bare | 237371392 | active | +--------------------------------------+----------+-------------+------------------+-----------+--------+ 23 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
• 24. Virtual network operations with Neutron ■ Through Neutron API, end-users can create virtual networks dedicated to their own tenants. - Details will be explained in "Configuration steps of virtual network." Keystone (User authentication) Neutron (Virtual network) Cinder (Block volumes) Horizon (Dashboard) Nova (VM instances) Glance (VM templates) The command name "quantum" has been replaced with "neutron" in the Havana release. # . keystonerc_demo_user # quantum net-list +--------------------------------------+-------------+-------------------------------------------------------+ | id | name | subnets | +--------------------------------------+-------------+-------------------------------------------------------+ | 843a1586-6082-4e9f-950f-d44daa83358c | private01 | 9888df89-a17d-4f4c-b427-f28cffe8fed2 192.168.101.0/24 | | d3c763f0-ebf0-4717-b3fc-cda69bcd1957 | private02 | 23b26d98-2277-4fb5-8895-3f42cde7e1fd 192.168.102.0/24 | | d8040897-44b0-46eb-9c51-149dfe351bbe | ext-network | 1b8604a4-f39d-49de-a97c-3e40117a7516 192.168.199.0/24 | +--------------------------------------+-------------+-------------------------------------------------------+ 24 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
  • 25. VM instance creation with Nova ■ When Nova receives an instance creation request, it communicates with Glance and Neutron through API. - Through Glance API, it downloads the template image to the compute node. - Through Neutron API, it attaches the launched instance to the virtual network. Keystone (User authentication) Neutron (Virtual network) Horizon (Dashboard) Cinder (Block volumes) Nova (VM instances) Glance (VM templates) Connecting to virtual switches Downloading template images 25 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
  • 26. Command operations to launch an instance (1) ■ The following shows how the end-user checks the necessary information before launching an instance # . keystonerc_demo_user # nova flavor-list +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+ | ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public | extra_specs | +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+ | 1 | m1.tiny | 512 | 0 | 0 | | 1 | 1.0 | True | {} | | 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True | {} | | 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True | {} | | 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True | {} | | 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True | {} | +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+ # nova keypair-list +-------+-------------------------------------------------+ | Name | Fingerprint | +-------+-------------------------------------------------+ | mykey | 31:8c:0e:43:67:40:f6:17:a3:f8:3f:d5:73:8e:d0:30 | +-------+-------------------------------------------------+ Nova retrieves the image list through Glance API. # nova image-list +--------------------------------------+----------+--------+--------+ | ID | Name | Status | Server | Nova retrieves the network list +--------------------------------------+----------+--------+--------+ through Neutron API. 
| 702d0c4e-b06c-4c15-85e5-9bb612eb6414 | Fedora19 | ACTIVE | | +--------------------------------------+----------+--------+--------+ # nova net-list # nova secgroup-list +--------------------------------------+-------------+------+ +---------+-------------+ | ID | Label | CIDR | | Name | Description | +--------------------------------------+-------------+------+ +---------+-------------+ | 843a1586-6082-4e9f-950f-d44daa83358c | private01 | None | | default | default | | d3c763f0-ebf0-4717-b3fc-cda69bcd1957 | private02 | None | +---------+-------------+ | d8040897-44b0-46eb-9c51-149dfe351bbe | ext-network | None | +--------------------------------------+-------------+------+ 26 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
  • 27. Command operations to launch an instance (2) ■ The following is to launch an instance using the information in the previous page. # nova boot --flavor m1.small --image Fedora19 --key-name mykey --security-groups default --nic net-id=843a1586-6082-4e9f-950f-d44daa83358c vm01 +-----------------------------+--------------------------------------+ | Property | Value | +-----------------------------+--------------------------------------+ | status | BUILD | | updated | 2013-11-22T06:22:52Z | | OS-EXT-STS:task_state | scheduling | | key_name | mykey | | image | Fedora19 | | hostId | | | OS-EXT-STS:vm_state | building | | flavor | m1.small | | id | f40c9b76-3891-4a5f-a62c-87021ba277ce | | security_groups | [{u'name': u'default'}] | | user_id | 2e57cd295e3f4659b151dd80f3a73468 | | name | vm01 | | adminPass | 5sUFyKhgovV6 | | tenant_id | 555b49dc8b6e4d92aa74103bfb656e70 | | created | 2013-11-22T06:22:51Z | | OS-DCF:diskConfig | MANUAL | | metadata | {} | ...snip... +-----------------------------+--------------------------------------+ # nova list +--------------------------------------+------+--------+-------------------------+ | ID | Name | Status | Networks | +--------------------------------------+------+--------+-------------------------+ | f40c9b76-3891-4a5f-a62c-87021ba277ce | vm01 | ACTIVE | private01=192.168.101.3 | +--------------------------------------+------+--------+-------------------------+ 27 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
• 28. Command operations to launch an instance (3) ■ You can specify a file with "--user-data" to run a customization script (user data). - The following is an example of launching an instance with a customization script and adding a floating IP. # cat hello.txt #!/bin/sh echo 'Hello, World!' > /etc/motd # nova boot --flavor m1.small --image Fedora19 --key-name mykey --security-groups default --nic net-id=843a1586-6082-4e9f-950f-d44daa83358c --user-data hello.txt vm01 # nova floating-ip-list +--------------+-------------+----------+-------------+ | Ip | Instance Id | Fixed Ip | Pool | +--------------+-------------+----------+-------------+ | 172.16.1.101 | None | None | ext-network | | 172.16.1.102 | None | None | ext-network | | 172.16.1.103 | None | None | ext-network | | 172.16.1.104 | None | None | ext-network | | 172.16.1.105 | None | None | ext-network | +--------------+-------------+----------+-------------+ # nova add-floating-ip vm01 172.16.1.101 # ssh -i ~/mykey.pem fedora@172.16.1.101 The authenticity of host '172.16.1.101 (172.16.1.101)' can't be established. RSA key fingerprint is b7:24:54:63:1f:02:33:4f:81:a7:47:90:c1:1b:78:5a. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added '172.16.1.101' (RSA) to the list of known hosts. Hello, World! [fedora@vm01 ~]$ 28 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
• 29. Floating IP association with Neutron API ■ When adding a floating IP to an instance with multiple NICs, you need to use Neutron API to specify the NIC port to associate. - After identifying the port ID which corresponds to the private IP, associate the floating IP to the port ID. # nova boot --flavor m1.small --image Fedora19 --key-name mykey --security-groups default --nic net-id=843a1586-6082-4e9f-950f-d44daa83358c --nic net-id=d3c763f0-ebf0-4717-b3fc-cda69bcd1957 vm01 # nova list +--------------------------------------+------+--------+--------------------------------------------------+ | ID | Name | Status | Networks | +--------------------------------------+------+--------+--------------------------------------------------+ | e8d0fa19-130f-4502-acfe-132962134846 | vm01 | ACTIVE | private01=192.168.101.3; private02=192.168.102.3 | +--------------------------------------+------+--------+--------------------------------------------------+ # quantum port-list +--------------------------------------+------+-------------------+------------------------------------+ | id | name | mac_address | fixed_ips | +--------------------------------------+------+-------------------+------------------------------------+ | 10c3cd17-78f5-443f-952e-1e3e427e477f | | fa:16:3e:37:7b:a6 | ... "ip_address": "192.168.102.3"} | | d0057651-e1e4-434c-a81d-c950b9c06333 | | fa:16:3e:e6:d9:4c | ... 
"ip_address": "192.168.101.3"} | +--------------------------------------+------+-------------------+------------------------------------+ # quantum floatingip-list +--------------------------------------+------------------+---------------------+---------+ | id | fixed_ip_address | floating_ip_address | port_id | +--------------------------------------+------------------+---------------------+---------+ | 06d24f23-c2cc-471f-a4e6-59cf00578141 | | 172.16.1.101 | | | 89b49a78-8fd7-461b-8fe2-fba4a341c8a2 | | 172.16.1.102 | | +--------------------------------------+------------------+---------------------+---------+ # quantum floatingip-associate 06d24f23-c2cc-471f-a4e6-59cf00578141 d0057651-e1e4-434c-a81d-c950b9c06333 29 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
• 30. Operations for key pairs and security groups ■ Security-related operations such as creating/registering key pairs and defining security groups can be done through Nova API. - The following is to create a new key pair "key01" and save the private (secret) key in "~/.ssh/key01.pem". # nova keypair-add key01 > ~/.ssh/key01.pem # chmod 600 ~/.ssh/key01.pem - The following is to register the public key of an existing key pair as "key02". # nova keypair-add --pub-key ~/.ssh/id_rsa.pub key02 - The following is to create a new security group "group01" and allow access to TCP port 22. # nova secgroup-create group01 "My security group." # nova secgroup-add-rule group01 tcp 22 22 0.0.0.0/0 ■ Note that since security groups are now under the control of Neutron, it's also good to know the commands to configure them through the quantum (neutron) API. # quantum security-group-create group01 --description "My security group." # quantum security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 --remote-ip-prefix "0.0.0.0/0" group01 30 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
  • 31. Block volume creation with Cinder ■ Block volumes can be created/deleted/snapshot-ed through Cinder API. - When attaching/detaching block volumes to/from running instances, you need to send a request to Nova API. Then Nova works together with Cinder through API calls. Keystone (User authentication) Neutron (Virtual network) Cinder (Block volumes) Horizon (Dashboard) Nova (VM instances) Glance (VM templates) Attaching block volumes 31 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
• 32. Command operations for block volumes ■ The following is an example of creating a 5GB block volume and attaching/detaching it to/from a running instance. # cinder create --display-name volume01 5 # cinder list +--------------------------------------+-----------+--------------+------+-------------+----------+-------------+ | ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | +--------------------------------------+-----------+--------------+------+-------------+----------+-------------+ | 78b4d23b-3b57-4a38-9f6e-10e5048170ef | available | volume01 | 5 | None | false | | +--------------------------------------+-----------+--------------+------+-------------+----------+-------------+ # nova volume-attach vm01 78b4d23b-3b57-4a38-9f6e-10e5048170ef auto +----------+--------------------------------------+ | Property | Value | +----------+--------------------------------------+ | device | /dev/vdb | | serverId | f40c9b76-3891-4a5f-a62c-87021ba277ce | | id | 78b4d23b-3b57-4a38-9f6e-10e5048170ef | | volumeId | 78b4d23b-3b57-4a38-9f6e-10e5048170ef | +----------+--------------------------------------+ "/dev/vdb" is the device name seen from the guest OS. # nova volume-detach vm01 78b4d23b-3b57-4a38-9f6e-10e5048170ef 32 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
• 33. Creating bootable volumes ■ You can create a bootable block volume by creating a new volume from a template image. - Using the bootable volume, you can boot an instance directly from the block volume. - The following is an example of creating a bootable volume from an existing template image and launching an instance with it. ("--image" option is ignored in the boot subcommand, but you need to specify one as a dummy entry.) # cinder create --image-id 702d0c4e-b06c-4c15-85e5-9bb612eb6414 --display-name Fedora19-bootvol 5 Template image ID # cinder list +--------------------------------------+-----------+------------------+------+-------------+----------+-------------+ | ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | +--------------------------------------+-----------+------------------+------+-------------+----------+-------------+ | 78b4d23b-3b57-4a38-9f6e-10e5048170ef | available | volume01 | 5 | None | false | | | bdde9405-8be7-48d5-a879-35e37c97512f | available | Fedora19-bootvol | 5 | None | true | | +--------------------------------------+-----------+------------------+------+-------------+----------+-------------+ # nova boot --flavor m1.small --image Fedora19 --key-name mykey --security-groups default --nic net-id=843a1586-6082-4e9f-950f-d44daa83358c --block_device_mapping vda=bdde9405-8be7-48d5-a879-35e37c97512f:::0 vm02 Block volume ID Flag to delete the volume after destroying instance. 
(1=yes) # nova volume-list +----------||-----------+-----------+------------------+------+-------------+--------------------------------------+ | ID || | Status | Display Name | Size | Volume Type | Attached to | +----------||-----------+-----------+------------------+------+-------------+--------------------------------------+ | 78b4d23b-||e5048170ef | available | volume01 | 5 | None | | | bdde9405-||e37c97512f | in-use | Fedora19-bootvol | 5 | None | b4cb7edd-317f-44e9-97db-5a04c41a4510 | +----------||-----------+-----------+------------------+------+-------------+--------------------------------------+ 33 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
  • 34. Internal services of Nova and Cinder 34 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
  • 35. Internal services of Nova Controller node Nova API Provide REST API Compute node Driver for a specific hypervisor to be used Nova Compute Choose compute node to launch VM Compute Driver Libvirt Order to launch VM Nova Scheduler Launch VM VM instance Retrieve resource information Update resource information Nova Conductor VM instance qcow2 overlay image qcow2 overlay image Proxy service for DB access Database Glance /var/lib/nova/instances/<ID> Overlaying Download template image qcow2 base image Downloaded image is cached for a defined period. Communication via the messaging server /var/lib/nova/instances/_base 35 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
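The role of Nova Scheduler in choosing a compute node can be sketched as a filter-and-weigh loop: discard the compute nodes that cannot fit the requested flavor, then pick the best of the rest. The host data and the "most free RAM" weighing rule below are made-up illustrations of the idea, not Nova's actual scheduler code or data model.

```python
# Hypothetical resource reports from three compute nodes.
hosts = [
    {"name": "compute01", "free_ram_mb": 4096, "free_disk_gb": 60},
    {"name": "compute02", "free_ram_mb": 1024, "free_disk_gb": 200},
    {"name": "compute03", "free_ram_mb": 8192, "free_disk_gb": 10},
]

# m1.small from the flavor list shown later in this deck.
flavor = {"ram_mb": 2048, "disk_gb": 20}

def schedule(hosts, flavor):
    # Filter: drop hosts that cannot satisfy the flavor's resources.
    candidates = [h for h in hosts
                  if h["free_ram_mb"] >= flavor["ram_mb"]
                  and h["free_disk_gb"] >= flavor["disk_gb"]]
    if not candidates:
        raise RuntimeError("No valid host found")
    # Weigh: pick the remaining host with the most free RAM.
    return max(candidates, key=lambda h: h["free_ram_mb"])["name"]

print(schedule(hosts, flavor))  # compute01 is the only host that fits both
```

Once a host is chosen, the "launch VM" order is put on the messaging server for that host's Nova Compute service.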
• 36. How messaging server works ■ The internal services and agents of one component (such as Nova) communicate through the messaging server. - The messaging server provides "topics" as channels of communication. The sender puts a message into a specific topic, and the receivers pick up messages from the topics to which they have subscribed. - The messages in topics have a flag to specify the delivery model, such as "all subscribers should receive" or "only one subscriber should receive." - Since multiple senders can put messages in the same topic, this realizes M:N asynchronous communication. Receiving messages Messaging server service Topic A service service Sending messages Topic B service ・・ Services which have subscribed to topic A. 36 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
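The two delivery models can be illustrated with a toy topic class (a conceptual sketch only; the real broker is QPID or RabbitMQ, and the class below is not their API): "fanout" delivers a message to every subscriber, while the default mode delivers it to exactly one subscriber, round-robin.

```python
import itertools

class Topic:
    """Toy model of a messaging topic with two delivery modes:
    fanout (all subscribers receive) and direct (exactly one does)."""

    def __init__(self):
        self.subscribers = []
        self._rr = None

    def subscribe(self, inbox):
        self.subscribers.append(inbox)
        # Round-robin iterator over the current subscriber list.
        self._rr = itertools.cycle(self.subscribers)

    def publish(self, message, fanout=False):
        if fanout:
            for inbox in self.subscribers:   # everyone receives it
                inbox.append(message)
        else:
            next(self._rr).append(message)   # only one subscriber receives it

compute1, compute2 = [], []
topic = Topic()
topic.subscribe(compute1)
topic.subscribe(compute2)

topic.publish("run_instance", fanout=False)  # picked up by a single worker
topic.publish("update_caps", fanout=True)    # broadcast to all workers
```

Multiple senders can call publish() on the same Topic, which is what makes the communication M:N and asynchronous.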
• 37. Features of qcow2 disk image ■ qcow2 is a disk image format designed for virtual machines that has the following features. ■ Dynamic block allocation - The real (physical) file size is smaller than its logical image size. The file grows as data is added. It's possible to extend the logical size, too. ■ Overlay mechanism - You can add an overlay file on top of the backing image. The overlay file contains only the changes from the backing image. - The backing image can be shared by multiple overlay files. This is useful to reduce physical disk usage when a lot of virtual machines are launched from the same template image. ■ Multiple snapshots - By taking snapshots of the image, you can reproduce the previous contents of the image, or create a new image from the snapshot. 37 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
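The overlay mechanism boils down to copy-on-write semantics, which a short sketch can make concrete (a toy model only; real qcow2 tracks clusters on disk, not Python dicts): writes go to the overlay, reads fall through to the shared backing image unless the block was rewritten.

```python
class OverlayImage:
    """Toy model of a qcow2 overlay: reads fall through to the backing
    image unless the block was rewritten; writes only touch the overlay."""

    def __init__(self, backing):
        self.backing = backing      # shared, never-modified template blocks
        self.overlay = {}           # per-instance modified blocks only

    def write(self, block, data):
        self.overlay[block] = data  # copy-on-write: backing stays pristine

    def read(self, block):
        return self.overlay.get(block, self.backing.get(block))

template = {0: "kernel", 1: "rootfs"}   # one shared base image
vm_a = OverlayImage(template)
vm_b = OverlayImage(template)

vm_a.write(1, "rootfs+changes")
# vm_a sees its own change, vm_b still sees the pristine template, and
# the physical cost of vm_b so far is only its (empty) overlay.
```

This is why many instances launched from one template consume little extra disk: they all share the same backing file under /var/lib/nova/instances/_base.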
• 38. Operations on qcow2 disk image ■ qemu-img is a command line tool to manipulate qcow2 images. Creating an image with a 5GB logical size. # qemu-img create -f qcow2 baseimage.qcow2 5G Formatting 'baseimage.qcow2', fmt=qcow2 size=5368709120 encryption=off cluster_size=65536 lazy_refcounts=off Creating an overlay file with baseimage.qcow2 as a backing image. # qemu-img create -f qcow2 -b baseimage.qcow2 layerimage.qcow2 Formatting 'layerimage.qcow2', fmt=qcow2 size=5368709120 backing_file='baseimage.qcow2' encryption=off cluster_size=65536 lazy_refcounts=off # qemu-img info layerimage.qcow2 image: layerimage.qcow2 file format: qcow2 virtual size: 5.0G (5368709120 bytes) disk size: 196K cluster_size: 65536 backing file: baseimage.qcow2 Creating a snapshot. # qemu-img snapshot -c snap01 layerimage.qcow2 # qemu-img snapshot -l layerimage.qcow2 Snapshot list: ID TAG VM SIZE DATE VM CLOCK 1 snap01 0 2013-11-22 17:08:02 00:00:00.000 Creating a new image from a snapshot. # qemu-img convert -f qcow2 -O qcow2 -s snap01 layerimage.qcow2 copiedimage.qcow2 Reference:  https://access.redhat.com/site/documentation/ja-JP/Red_Hat_Enterprise_Linux/6/html-single/    Virtualization_Administration_Guide/index.html#sect-Virtualization-Tips_and_tricks-Using_qemu_img 38 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
• 39. Public key injection mechanism ■ Nova Compute injects the public key into "/root/.ssh/authorized_keys" of the local disk image before launching the instance. ■ Cloud-Init can also be used to set up public key authentication at boot time, as it can retrieve the public key through meta-data(*). - Because allowing root login is undesirable in many cases, you'd better configure Cloud-Init to create a general user and set up public key authentication for it. Retrieving the public key from meta-data. $ curl http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA5W2IynhVezp+DpN11xdsY/8NOqeF8r7eYqVteeWZSBfnYhKn 8D85JmByBQnJ7HrJIrdMvfTYwWxi+swfFlryG3A+oSll0tT71FLAWnAYz26ML3HccyJ7E2bD66BSditbDITK H3V66oN9c3rIEXZYQ3A+GEiA1cFD++R0FNKxyBOkjduycvksB5Nl9xb3k6z4uoZ7JQD5J14qnooM55Blmn2C C2/2KlapxMi0tgSdkdfnSSxbYvlBztGiF3M4ey7kyuWwhE2iPBwkV/OhANl3nwHidcNdBrAGC3u78aTtUEwZ tNUqrevVKM/yUfRRyPRNivuGOkvjTDUL/9BGquBX9Q== enakai@kakinoha (*) In particular, when booting from block volume, Nova Compute fails to inject the public key. Use of Cloud-Init is mandatory in this case. 39 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
  • 40. Block volume use cases and corresponding APIs OS It can be re-attached to another instance. OS User Data User Data (2) Attach to a running instance to store user data. (4) Create a new block volume from the snapshot. (1) Create a new block volume. (3) Create a snapshot Template image OS Create a block volume from a template image. ■ Cinder API - volume create/delete/list/show (create from snapshot, image) OS - snapshot create/delete/list/show ■ Nova API - volume attach/detach 40 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
• 41. How Nova and Cinder work together ■ In a typical configuration, block volumes are created as LUNs in iSCSI storage boxes. Cinder operates on the management interface of the storage through the corresponding driver. ■ Nova Compute attaches the LUN to the host Linux using the software initiator; then it's attached to the VM instance through the KVM hypervisor. VM instance /dev/vdb Cinder Nova Compute Virtual disk Create LUNs Storage box Linux KVM /dev/sdX iSCSI LUN iSCSI SW Initiator iSCSI Target 41 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
• 42. Internal services of Cinder ■ Volume drivers handle the management interface of the corresponding storage. - When using multiple types of storage, Cinder Scheduler chooses the driver to be used based on the requested storage type. Provide REST API Controller node Cinder API Cinder-Volume Driver for a specific type of storage Create LUNs Storage box Volume Driver Cinder Scheduler Choose an appropriate volume driver Volume information Database LUN iSCSI connection Nova Compute Nova API Communication via the messaging server Provide REST API 42 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
  • 43. Using LVM driver ■ Cinder provides the LVM driver as a reference implementation which uses Linux LVM instead of external storage boxes. - Snapshot feature is implemented with LVM snapshot where the delta volume has the same size as the base volume. Cinder VM instance /dev/vdb Virtual disk Linux KVM /dev/sdX Create logical volumes and export as iSCSI LUNs. Nova Compute iSCSI LUN VG: cinder-volumes LV iSCSI SW Target (tgtd) iSCSI SW Initiator 43 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
• 44. Using NFS driver ■ Cinder also provides the NFS driver which uses an NFS server as a storage backend. - The driver simply mounts the NFS-exported directory and creates disk image files in it. Compute nodes use NFS mounts to access the image files. Cinder VM instance /dev/vdb Nova Compute NFS mount Virtual disk NFS server Linux KVM ・・・ ・・・ NFS mount 44 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
• 45. Using GlusterFS driver ■ There is a driver for the GlusterFS distributed filesystem, too. - Currently it uses the FUSE mount mechanism. This will be replaced with a more optimized mechanism (libgfapi) which bypasses the FUSE layer. VM instance Cinder Nova Compute FUSE mount /dev/vdb Virtual disk GlusterFS cluster Linux KVM ・・・ ・・・ FUSE mount 45 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
  • 46. Architecture overview of Neutron 46 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
  • 47. Logical view of Neutron's virtual network ■ Each tenant has its own virtual router which works like "the broadband router in your home network." - Tenant users add virtual switches behind the router and assign private subnet addresses to them. It's possible to use overlapping subnets with other tenants. ■ When launching an instance, the end-user selects virtual switches to connect it. - The number of virtual NICs of the instance corresponds to the number of switches to connect. Private IPs are assigned via DHCP. External network Virtual router for tenant A Virtual switch 192.168.101.0/24 Virtual router for tenant B Virtual switch 192.168.102.0/24 47 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
• 48. Plugin architecture of Neutron ■ The actual work of creating virtual networks is done by plugin agents. - There are various plugins for Neutron, including commercial products from third-party vendors. - OpenStack provides "LinuxBridge plugin" and "Open vSwitch plugin" as standard/reference implementations. Network controller Controller node Provide REST API Create virtual routers L2 Agent Create virtual L2 switches DHCP Agent Neutron service L3 Agent Assign private IP addresses Compute node L2 Agent Create virtual L2 switches Communication via the messaging server 48 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
• 49. Network configuration with standard plugin ■ The following shows the typical configuration using the LinuxBridge plugin or the Open vSwitch plugin. - L3 Agent on the network node provides the virtual router functions connecting the private and public network. ("eth0" of each node is used for accessing host Linux, not for VM instance communication.) - It's not possible to have multiple network nodes. The scalable network feature is under development today. Public network Private network eth0 eth1 eth2 eth0 L2 Agent Provide DHCP function for private networks eth1 L2 Agent DHCP Agent Create virtual L2 switches VM eth1 eth0 VM L2 Agent VM ・・・ VM L3 Agent Network node Provide virtual router function Compute node 49 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
  • 50. Internal architecture of LinuxBridge plugin 50 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
  • 51. Internal architecture of LinuxBridge plugin ■ This section describes how LinuxBridge plugin implements the virtual network in the drawing below as a concrete example. External network Virtual router Virtual L2 switch private01 vm01 Virtual L2 switch private02 vm02 vm03 51 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
  • 52. Configuration inside compute node ■ Linux bridges are created for each virtual switch. Outside the compute node, the network traffic of each switch is separated with VLAN. Configured by Nova Compute vm01 IP eth0 vm02 IP eth0 vm03 IP IP eth1 eth0 IP is assigned from dnsmasq on network node. brqyyy brqxxxx private01 VLANs are created for each virtual L2 switch. Physical L2 switch for private network private02 eth1.102 eth1.101 eth1 Configured by L2 Agent VLAN101 VLAN102 52 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
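The per-VLAN bridging above can be reproduced by hand with standard Linux tools. The following is an illustrative sketch only (interface and bridge names are examples; the L2 Agent generates its own names, and these commands require root):

```
# Create a VLAN subinterface for VLAN 101 on eth1 and attach it to a
# Linux bridge together with the instance's tap device.
ip link add link eth1 name eth1.101 type vlan id 101
ip link set eth1.101 up
brctl addbr brqxxxx
brctl addif brqxxxx eth1.101
brctl addif brqxxxx tap-vm01      # virtual NIC of vm01 (example name)
ip link set brqxxxx up
```

Traffic from vm01 then leaves the host tagged with VLAN 101, which is how the plugin keeps each virtual L2 switch separated on the physical network.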
• 53. Configuration inside network node To/From public network eth1 brqxxxx Conceptually, there exists a virtual router here. IP External GW IP Internal GW IP IP Configured by DHCP Agent dnsmasq is started for each subnet. ■ ■ Virtual router is implemented with Linux's packet forwarding feature. dnsmasq is used as a DHCP server for providing private IP addresses for each subnet. - IP address is assigned corresponding to the MAC address of the virtual NIC. qg-VVV NAT and filtering is done by iptables. IP qr-WWW qr-YYY dnsmasq dnsmasq ns-XXX Configured by L3 Agent IP ns-ZZZ IP brqyyy brqxxxx private01 private02 eth1.102 eth1.101 eth2 Configured by L2 Agent To/From private network 53 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
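The fixed MAC-to-IP assignment mentioned above is what dnsmasq's static DHCP host entries provide. DHCP Agent generates a hosts file along these lines (the MAC addresses, hostnames, and IPs here are made up for illustration):

```
# dnsmasq --dhcp-hostsfile entries: <MAC>,<hostname>,<IP>
fa:16:3e:11:22:33,host-192-168-101-3,192.168.101.3
fa:16:3e:44:55:66,host-192-168-101-4,192.168.101.4
```

Because each dnsmasq instance serves exactly one subnet and matches on the virtual NIC's MAC, an instance always receives the private IP that Neutron recorded for its port.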
  • 54. Internal architecture of Open vSwitch plugin 54 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
• 55. What is Open vSwitch? ■ Open vSwitch is software to create virtual L2 switches on top of Linux. It supports many features comparable to physical L2 switch products. - In particular, since it supports the OpenFlow protocol which provides a fine-grained packet control feature, Open vSwitch is widely used for virtual network applications. Supported features of Open vSwitch (http://openvswitch.org/features/) ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● Visibility into inter-VM communication via NetFlow, sFlow(R), IPFIX, SPAN, RSPAN, and GRE-tunneled mirrors LACP (IEEE 802.1AX-2008) Standard 802.1Q VLAN model with trunking BFD and 802.1ag link monitoring STP (IEEE 802.1D-1998) Fine-grained QoS control Support for HFSC qdisc Per VM interface traffic policing NIC bonding with source-MAC load balancing, active backup, and L4 hashing OpenFlow protocol support (including many extensions for virtualization) IPv6 support Multiple tunneling protocols (GRE, VXLAN, IPsec, GRE and VXLAN over IPsec) Remote configuration protocol with C and Python bindings Kernel and user-space forwarding engine options Multi-table forwarding pipeline with flow-caching engine Forwarding layer abstraction to ease porting to new software and hardware platforms 55 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
• 56. What is OpenFlow? ■ OpenFlow is a protocol to provide fine-grained control of packet forwarding from an external controller. - OpenFlow switches query the external controller about how received packets should be handled. - Since the programmability of controller software gives flexibility over packet operations, it is well suited to creating multi-tenant virtual networks. For example, it can decide the forwarding port according to source/destination MAC addresses, modify the VLAN tag in the header, etc. OpenFlow controller Controller instructs how packets should be handled through the OpenFlow protocol. OpenFlow switches 56 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
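The match/action model behind OpenFlow can be sketched as a toy flow table in Python. This is purely conceptual; the port names and VLAN IDs are hypothetical, not taken from a real deployment:

```python
# A toy flow table illustrating the OpenFlow match/action idea.
# Each flow maps match fields to a list of actions; a real switch also
# has priorities, wildcards, and a controller round-trip on table miss.
FLOW_TABLE = [
    # Re-tag internal VLAN 1 to external VLAN 101 and forward it out.
    {"match": {"in_port": "int-br-priv", "vlan": 1},
     "actions": [("mod_vlan", 101), ("output", "eth1")]},
    {"match": {"in_port": "int-br-priv", "vlan": 2},
     "actions": [("mod_vlan", 102), ("output", "eth1")]},
]

def handle_packet(in_port, vlan):
    """Return the actions for a packet, mimicking a flow-table lookup."""
    for flow in FLOW_TABLE:
        match = flow["match"]
        if match["in_port"] == in_port and match["vlan"] == vlan:
            return flow["actions"]
    return [("drop", None)]  # table miss: drop (a real switch may ask the controller)

print(handle_packet("int-br-priv", 1))   # -> [('mod_vlan', 101), ('output', 'eth1')]
print(handle_packet("int-br-priv", 99))  # -> [('drop', None)]
```

The point is that the forwarding decision is data-driven: changing tenant connectivity means rewriting table entries from the controller, not reconfiguring the switch itself.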
  • 57. Internal architecture of Open vSwitch plugin ■ This section describes how Open vSwitch plugin implements the virtual network in the drawing below as a concrete example. External network Tenant A Virtual router Tenant B Virtual router Virtual L2 switch projectA vm01 Virtual L2 switch project B vm02 vm03 vm04 57 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
• 58. Configuration inside compute node (1) ■ See the next page for explanation. Configured by Nova Compute IP vm01 IP eth0 qvoXXX vm02 IP eth0 qvoYYY eth0 qvoZZZ Port VLAN tag:1 "Internal VLAN" is assigned to each virtual L2 switch. vm03 IP vm04 eth0 qvoWWW Port VLAN tag:2 br-int int-br-priv phy-br-priv Configured by L2 Agent br-priv Translation between "Internal" and "External" VLAN - Internal VLAN1 <-> External VLAN101 - Internal VLAN2 <-> External VLAN102 eth1 VLAN101 VLAN102 Open vSwitch 58 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
• 59. Configuration inside compute node (2) ■ Virtual NICs of VM instances are connected to the common "Integration switch (br-int)". - Internal VLAN is assigned to the connected port according to the (logical) virtual L2 switch to be connected. ■ Connection to the physical L2 switch for the private network is done through the "Private switch (br-priv)". - External VLANs are assigned on the physical switch according to the (logical) virtual L2 switch. The translation between Internal and External VLAN is done with OpenFlow. ■ In addition to VLAN, other separation mechanisms such as GRE tunneling can be used over the physical network connection. - In the case of GRE tunneling, the translation between "Internal VLAN" and "GRE tunnel ID" is done with OpenFlow. 59 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
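As a rough illustration, the internal/external VLAN translation described above corresponds to OpenFlow entries like the following in `ovs-ofctl` syntax. The port numbers are hypothetical and the exact rules are installed by the L2 Agent, so treat this as a sketch of the idea, not the agent's literal output:

```
# Egress on br-priv: re-tag internal VLAN 1/2 to external VLAN 101/102
# before the frame leaves on the physical NIC.
ovs-ofctl add-flow br-priv "in_port=1,dl_vlan=1,actions=mod_vlan_vid:101,NORMAL"
ovs-ofctl add-flow br-priv "in_port=1,dl_vlan=2,actions=mod_vlan_vid:102,NORMAL"

# Ingress on br-int: re-tag external VLAN 101/102 back to the internal
# VLAN used on the integration switch.
ovs-ofctl add-flow br-int "in_port=1,dl_vlan=101,actions=mod_vlan_vid:1,NORMAL"
ovs-ofctl add-flow br-int "in_port=1,dl_vlan=102,actions=mod_vlan_vid:2,NORMAL"
```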
• 60. Configuration inside network node ■ To/From public network Since two virtual routers are configured, there are two paths of packet forwarding. eth1 IP IP qg-VVV IP tapXXX qg-CCC IP NAT and filtering is done by iptables. dnsmasq Configured by DHCP Agent Configured by L3 Agent br-ex IP qr-YYY dnsmasq IP qr-BBB Port VLAN tag:1 tapAAA Port VLAN tag:2 br-int int-br-priv Translation between "Internal" and "External" VLAN - Internal VLAN1 <-> External VLAN101 - Internal VLAN2 <-> External VLAN102 Configured by L2 Agent phy-br-priv br-priv eth2 To/From private network 60 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
• 61. Overlapping subnet with network namespace ■ ■ When using multiple virtual routers, the network node needs to have independent NAT/filtering configurations for each virtual router to allow the use of overlapping subnets among multiple tenants. This is done with Linux's network namespace feature which allows Linux to have multiple independent network configurations. The following are the steps to use a network namespace. - Create a new namespace. - Allocate network ports inside the namespace. (Both physical and logical ports can be used.) - Configure networks (port configuration, iptables configuration, etc.) inside the namespace. - Then the configuration is applied to network packets which go through the network ports inside this namespace. ■ L3 Agent of LinuxBridge / Open vSwitch plugin uses network namespaces. - It can be configured not to use namespaces, but the use of overlapping subnets should be disabled in this case. 61 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
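The steps above can be sketched with the `ip` command. The namespace, port, and address names below are examples only (the agents generate UUID-based names), and the commands require root:

```
# Create a namespace and give it its own port, address, and NAT rules.
ip netns add qrouter-demo
ip link set qr-demo netns qrouter-demo             # move a port into the namespace
ip netns exec qrouter-demo ip addr add 192.168.1.1/24 dev qr-demo
ip netns exec qrouter-demo ip link set qr-demo up
ip netns exec qrouter-demo iptables -t nat -A POSTROUTING -o qg-demo -j MASQUERADE
# The same 192.168.1.0/24 can now be configured again in another
# namespace on the same node without any address conflict.
```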
• 62. The overall picture of Open vSwitch plugin (1) ■ See the next page for details. External network Network namespace Open vSwitch Network node eth1 br-ex dnsmasq Virtual router's GW IP on external network side. dnsmasq NAT connection via iptables br-int br-priv VLAN ID mapping for virtual L2 switches is done with OpenFlow eth2 VM1 Compute node VM2 br-int br-priv Virtual router's GW IP on private network side. eth1 VLAN Trunk VLAN ID mapping for virtual L2 switches is done with OpenFlow 62 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
• 63. The overall picture of Open vSwitch plugin (2) ■ While an end-user defines the virtual network components such as virtual L2 switches and virtual routers, the agents work in the following way. - When a virtual L2 switch is defined, L2 Agent configures the VLAN ID mapping on "br-int" and "br-priv" so that compute nodes are connected to each other via VLAN. At the same time, DHCP Agent starts a new dnsmasq which provides the DHCP function to the corresponding VLAN. - When a virtual router is defined and connected to the external network, L3 Agent creates a port on "br-ex" which works as an external gateway of the virtual router. - When a virtual L2 switch is connected to the virtual router, L3 Agent creates a port on "br-int" which works as an internal gateway of the virtual router. It also configures iptables to start NAT connection between public and private networks. ■ In addition to the agents which have already been explained, there exists "Metadata Proxy Agent" which helps the metadata mechanism to work. - iptables on the network node is configured so that packets to "169.254.169.254:80" are redirected to Metadata Proxy Agent. This agent determines the instance which sent the packet from the source IP address, and sends back the corresponding message including the requested metadata. 63 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
• 64. Packet redirection to Metadata Proxy Agent ■ The following commands show the iptables configuration within the namespace which contains the virtual router. There is a redirection entry where packets to "169.254.169.254:80" are redirected to Metadata Proxy Agent on the same node. # ip netns list qrouter-b35f6433-c3e7-489a-b505-c3be5606a643 qdhcp-1a4f4b41-3fbb-48a6-bb12-9621077a4f92 qrouter-86654720-d4ff-41eb-89db-aaabd4b13a35 qdhcp-f8422fc9-dbf8-4606-b798-af10bb389708 Namespace containing the virtual router # ip netns exec qrouter-b35f6433-c3e7-489a-b505-c3be5606a643 iptables -t nat -L ... Chain quantum-l3-agent-PREROUTING (1 references) target prot opt source destination REDIRECT tcp -- anywhere 169.254.169.254 tcp dpt:http redir ports 9697 ... # ps -ef | grep 9697 root 63055 1 0 Jul 09 ? 00:00:00 python /bin/quantum-ns-metadata-proxy --pid_file=/var/lib/quantum/external/pids/b35f6433-c3e7-489a-b505-c3be5606a643.pid --router_id=b35f6433-c3e7-489a-b505-c3be5606a643 --state_path=/var/lib/quantum --metadata_port=9697 --verbose --log-file=quantum-ns-metadata-proxy-b35f6433-c3e7-489a-b505-c3be5606a643.log --log-dir=/var/log/quantum ■ Note that "NOZEROCONF=yes" should be set in "/etc/sysconfig/network" of the guest OS when using the metadata mechanism. - Without it, packets to "169.254.0.0/16" are not routed outside the guest OS due to the APIPA specification. 64 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
  • 65. Configuration steps of virtual network 65 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
• 66. Configuration steps of virtual network (1) ■ The following are the steps for configuring the virtual network with the quantum command. - We use the following environment variables as parameters specific to each setup. public="192.168.199.0/24" gateway="192.168.199.1" nameserver="192.168.199.1" pool=("192.168.199.100" "192.168.199.199") - Define an external network "ext-network". tenant=$(keystone tenant-list | awk '/ services / {print $2}') quantum net-create --tenant-id $tenant ext-network --shared --provider:network_type flat --provider:physical_network physnet1 --router:external=True ● ● ● ● Since the external network is shared by multiple tenants, the owner tenant (--tenant-id) is "services" (a general tenant for shared services), and the "--shared" option is added. As we suppose there are no VLANs in the external network, network_type is "flat". In the plugin configuration file (plugin.ini), the Open vSwitch for the external network connection (br-ex) has an alias "physnet1" which is specified as physical_network here. "--router:external=True" is specified to allow it to be used as the default gateway of virtual routers. 66 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
  • 67. Configuration steps of virtual network (2) - Define a subnet of the external network. quantum subnet-create --tenant-id $tenant --gateway ${gateway} --disable-dhcp --allocation-pool start=${pool[0]},end=${pool[1]} ext-network ${public} ● "--allocation-pool" specifies the IP address pool (the range of IP addresses which can be used by OpenStack as router ports and floating IP, etc.) - Define a virtual router "demo_router" for the tenant "demo", and attach it to the external network. tenant=$(keystone tenant-list|awk '/ demo / {print $2}') quantum router-create --tenant-id $tenant demo_router quantum router-gateway-set demo_router ext-network ● The owner tenant (--tenant-id) is "demo". Alias setting for Open vSwitch in plugin configuration file (/etc/quantum/plugin.ini). bridge_mappings=physnet1:br-ex,physnet2:br-priv tenant_network_type=vlan network_vlan_ranges=physnet1,physnet2:100:199 Mapping between alias and actual Open vSwitch name VLAN ID range for each Open vSwitch. (VLAN is not used for physnet1.) 67 Copyright (C) 2014 National Institute of Informatics, All rights reserved.
• 68. Configuration steps of virtual network (3) - Define a virtual L2 switch "private01". quantum net-create --tenant-id $tenant private01 --provider:network_type vlan --provider:physical_network physnet2 --provider:segmentation_id 101 ● ● Since VLAN is used as a separation mechanism of private networks, "vlan" is specified for network_type. VLAN ID is specified with segmentation_id. In the plugin configuration file (plugin.ini), the Open vSwitch for the private network connection (br-priv) has an alias "physnet2" which is specified as physical_network here. - Define a subnet of "private01", and connect it to the virtual router. quantum subnet-create --tenant-id $tenant --name private01-subnet --dns-nameserver ${nameserver} private01 192.168.1.101/24 quantum router-interface-add demo_router private01-subnet ● "192.168.1.101/24" is specified for the subnet as an example here. 68 Copyright (C) 2014 National Institute of Informatics, All rights reserved.