RED HAT OPENSTACK PLATFORM OVERVIEW
INNOVATIONS DEVELOPMENT LAB
DMYTRO HANZHELO
RED HAT ACCREDITED PROFESSIONAL
RED HAT SALES ENGINEER SPECIALIST - CLOUD INFRASTRUCTURE
GENERAL OVERVIEW
• The OpenStack project is an open source cloud computing platform that
supports all types of cloud environments. The project aims for simple
implementation, massive scalability, and a rich set of features. Cloud
computing experts from around the world contribute to the project.
• OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a variety of complementary services. Each service offers an Application Programming Interface (API) that facilitates integration between the services.
• This guide covers step-by-step deployment of the major OpenStack
services using a functional example architecture suitable for new users of
OpenStack with sufficient Linux experience. This guide is not intended to
be used for production system installations, but to create a minimum
proof-of-concept for the purpose of learning about OpenStack.
OPENSTACK MAP
OPENSTACK COMPONENTS
• Compute (Nova)
• Networking (Neutron)
• Block Storage (Cinder)
• Object Storage (Swift)
• VM Image Storage (Glance)
• Identity and Access Control (Keystone)
• Orchestration Engine (Heat)
• Telemetry (Ceilometer)
• Bare Metal for Tenants (Ironic)
• Dashboard (Horizon)
• Data Processing (Sahara)
• Deployment and Management (Director)
OVERVIEW
OpenStack Connects Two Worlds
OVERVIEW
• Tenants are actual IaaS cloud users
  • Consume services enabled in the cloud
  • See only their own and shared cloud resources
  • Isolated from other tenants
  • Do not have a view into the cloud infrastructure
• Operators are users with special privileges
  • Often the same role that has root access to the systems
  • Configure, monitor, and maintain the OpenStack cloud for tenants
  • Aware of the cloud infrastructure and the external network and storage environment
OVERVIEW
Cloud Interfaces
• Tenants and operators use the same interfaces:
  • OpenStack Dashboard (Horizon)
  • CLI tools
  • REST APIs
  • Libraries (such as os_cloud in Ansible or boto in Python)
• The OpenStack policy engine filters which API calls require administrative privileges
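The policy filtering described above can be sketched in miniature. The rule table below is a simplified stand-in for oslo.policy, and the action names and roles are illustrative, not taken from Nova's actual policy file:

```python
# Minimal sketch of an oslo.policy-style rule table gating API calls.
# Rule strings and action names here are illustrative assumptions.
POLICY = {
    "compute:create": "role:member",         # ordinary tenants may boot VMs
    "compute:get_all_tenants": "role:admin", # listing every tenant's VMs is admin-only
}

def is_authorized(action: str, roles: set) -> bool:
    rule = POLICY.get(action, "role:admin")  # default-deny to non-admins
    required_role = rule.split(":", 1)[1]
    return required_role in roles

print(is_authorized("compute:create", {"member"}))           # True
print(is_authorized("compute:get_all_tenants", {"member"}))  # False
```

The same call can thus be exposed to both tenants and operators through one API, with the policy engine deciding who gets through.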
COMPUTE (NOVA)

TENANT VIEW
• I need VMs, anytime
• How many can I have?
• It must be secure
• SSH and VNC, please

OPERATOR VIEW
• I have hardware capacity available
• This is how you consume it
• I set usage quotas
• I design for performance and scalability
COMPUTE (NOVA)

TENANT VIEW
• Similar to Amazon EC2
• Self-service VMs: boot an instance with a selected flavor (vCPU, RAM, disk size), OS image (from Glance), SSH key pair, host aggregate or availability zone (AZ), custom metadata, user data, security groups, with/without ephemeral disk
• Reboot, stop, resize, terminate
• See the console log of an instance, open a VNC/SPICE session, change the VM root password (if the OS supports it)
• Reserve, assign, and release floating IPs
• Manage key pairs and security groups
• Check quota usage
• Select which Neutron network or port to use
• Other Neutron/Cinder shortcuts for network and volume management

OPERATOR VIEW
• No need to manage hypervisors individually, at any scale, thanks to the distributed design of OpenStack
• Supports KVM and VMware (vCenter)
• Defines which choices are available to tenants: flavors offering specific capabilities and carefully planned capacity and overcommit ratios
• Easier maintenance and operations with support for node evacuation, marking a host down, and instance live migration
• Define host aggregates and AZs with specific metadata to allow advanced scheduling and request filtering
• Set NFV-specific flavors, including vCPU pinning, large pages, NUMA awareness for vCPU, RAM, and I/O devices, and SR-IOV/PCI passthrough
• Instance HA, transparent to tenants, if enabled
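The capacity and overcommit planning in the operator view comes down to simple multiplication. A sketch, assuming the historical Nova defaults of 16:1 CPU and 1.5:1 RAM overcommit (the right ratios are workload-dependent):

```python
# Effective schedulable capacity of a compute node under overcommit.
# cpu_ratio=16.0 and ram_ratio=1.5 mirror common Nova defaults
# (cpu_allocation_ratio / ram_allocation_ratio); tune per workload.
def effective_capacity(physical_cores, ram_gb, cpu_ratio=16.0, ram_ratio=1.5):
    return {
        "vcpus": int(physical_cores * cpu_ratio),
        "ram_gb": int(ram_gb * ram_ratio),
    }

cap = effective_capacity(physical_cores=32, ram_gb=256)
print(cap)  # {'vcpus': 512, 'ram_gb': 384}
```

Flavors are then sized so that expected instance mixes fit within this advertised, not physical, capacity.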
NETWORKING (NEUTRON)

TENANT VIEW
• I need my own network, isolated from others
• Some private IPs, some public IPs
• These are my QoS specs
• Let me share networks with others

OPERATOR VIEW
• I design a network overlay and provide external access
• I have very few public IPs
• I set rules, policies, quotas
• With SDN, I can centrally manage and monitor it all
NETWORKING (NEUTRON)

TENANT VIEW
• Similar to Amazon VPC and ELB
• Create, read, update, and delete (CRUD) networks, subnets, and ports, for basic L2 and L3 with IP address management (DHCP)
• Define a tenant network (overlay)
• Additionally:
  • Provider networks
  • Quotas
  • Security groups (per port)
  • East/west L3 routing with tenant-defined routers
  • External gateway, NAT, floating IPs
  • Load balancing, VPN, and firewall
  • IPv6 tenant network management
  • QoS (rate-limit policies) per port or per network
  • RBAC for granular sharing of tenant networks

OPERATOR VIEW
• Defines provider networks, manually set up in Neutron by the operator, representing pre-existing networks (e.g., VLANs), useful for pointing to corporate DNS or gateways with multiple routes
• Multiple simultaneous L2 technologies on a single installation via ML2
• Default Open vSwitch, or choose from dozens of commercial SDN vendors
• Configures the SSL/TLS back end for LBaaS
• Define floating IP ranges, normally for publicly routable IPv4 addresses
• Offer/delegate IPv6 tenant networks (SLAAC, DHCP)
• Define and enforce QoS (currently only egress flows)
• VXLAN offloading to hardware available (up to 4x throughput)
• Distributed Virtual Routing (DVR) for better scalability
• L2Pop and ARP responder to mitigate ARP flooding at scale
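The tension between private tenant subnets and the operator's scarce public IPs is plain CIDR arithmetic; the address ranges below are illustrative:

```python
import ipaddress

# Tenant overlay subnet (private) and operator floating-IP range (public).
tenant_subnet = ipaddress.ip_network("192.168.10.0/24")
floating_range = ipaddress.ip_network("203.0.113.0/28")

# "I have very few public IPs": count what the operator can hand out,
# excluding the network and broadcast addresses.
usable_floating = floating_range.num_addresses - 2
print(usable_floating)  # 14

# NAT pairs a private fixed IP on the overlay with a public floating IP.
fixed_ip = ipaddress.ip_address("192.168.10.5")
print(fixed_ip in tenant_subnet)  # True
```

Quotas on floating IPs are what keep a handful of tenants from exhausting such a small public pool.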
BLOCK STORAGE (CINDER)

TENANT VIEW
• Too much data in my VMs!
• I need permanent storage
• Can I snapshot, back up, and roll back?
• Encrypted, please

OPERATOR VIEW
• I constantly buy storage
• I must allocate space to tenants
• I can combine different tiers of technologies (NAS, SAN)
• I set rules, policies, quotas
BLOCK STORAGE (CINDER)

TENANT VIEW
• Similar to Amazon EBS
• CRUD operations on block devices
• Add hard drives to an instance
• Persistent storage that can be cloned, snapshotted, replicated, or imported/exported to another AZ (or even public storage like Google Cloud Storage)
• Encryption available via LUKS (if enabled by operators)
• Hot-unplug from one instance and re-attach to another instance
• Non-disruptive, incremental snapshots: ideal for backup/restore and DR use cases
• QoS available (total IOPS)
• If exposed, vendor-specific features (mirroring, compression, replication, and thin provisioning)

OPERATOR VIEW
• Uses Red Hat Ceph Storage by default
• Multiple back ends (LVM, iSCSI, NFS, ScaleIO, etc.), including proprietary ones with more specific features
• Faster provisioning via oversubscription, thin provisioning, and a generic image cache
• Simplified operations, DR, and backup with generic volume migration and replication (sync/async, with N replicas) between different storage back ends
• Private volume types for premium levels of service (SSD, thick provisioned)
• iSCSI multipath support for extra reliability
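The oversubscription mentioned in the operator view is, in Cinder, a ratio between provisioned and physical capacity. A sketch, assuming the max_over_subscription_ratio option at its historical default of 20.0:

```python
# How thin provisioning lets an operator oversubscribe a Cinder back end.
# The ratio name mirrors Cinder's max_over_subscription_ratio option;
# 20.0 is its long-standing default, used here as an assumption.
def provisionable_gb(physical_gb, max_over_subscription_ratio=20.0):
    return physical_gb * max_over_subscription_ratio

print(provisionable_gb(10_000))  # 200000.0 GB of thin volumes on 10 TB of disk
```

The bet, of course, is that tenants never fill their volumes all at once; monitoring actual consumption is what keeps the bet safe.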
OBJECT STORAGE (SWIFT)

TENANT VIEW
• My application needs object storage (files, media)
• I can use HTTP(S)
• Stateless, please! No time for mounting file systems

OPERATOR VIEW
• I offer a private S3-like experience
• I must scale without limits
• I want advanced features
OBJECT STORAGE (SWIFT)

TENANT VIEW
• Similar to Amazon S3 (a modern take on FTP or WebDAV)
• CRUD objects in containers, per account
• Ideal for storing static objects (media, web files, email)
• Only useful if the application understands the Swift/S3 API
• Also useful for storing Glance image backups
• Not meant to be used as a POSIX file system
• Fast-POST allows fast, efficient updates of metadata without uploading content again

OPERATOR VIEW
• Very few dependencies on other OpenStack modules, mostly Keystone for RBAC
• Scales horizontally up to petabytes
• Replication for global clusters
• Advanced Swift features: middleware for API processing, temporary URLs, URL rewrite
• Swift requires its own storage space; it is not integrated with Ceph
• Erasure coding trades some availability for further storage efficiency
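The erasure-coding trade-off in the operator view is easiest to see as raw-versus-usable arithmetic; the 8+4 policy below is an illustrative layout, not a Swift default:

```python
# Raw storage needed for a given usable capacity under two Swift policies:
# classic 3x replication versus an 8+4 erasure-coding scheme (8 data
# fragments + 4 parity fragments; survives the loss of any 4 fragments).
def raw_tb_needed(usable_tb, data_frags=None, parity_frags=None, replicas=None):
    if replicas:
        return usable_tb * replicas
    return usable_tb * (data_frags + parity_frags) / data_frags

print(raw_tb_needed(100.0, replicas=3))                    # 300.0
print(raw_tb_needed(100.0, data_frags=8, parity_frags=4))  # 150.0
```

Halving the raw footprint is why erasure coding appeals for large, cooler data, at the cost of more CPU and reconstruction work on failure.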
VM IMAGE STORAGE (GLANCE)

TENANT VIEW
• Which operating systems can I use?
• This is my own version, store it just for me
• Is the OS image genuine?
• Take this VMware template and import it

OPERATOR VIEW
• Only approved OS images can be used in my cloud
• Centrally offer updated OS images
• Leverage storage integration to reduce network usage
VM IMAGE STORAGE (GLANCE)

TENANT VIEW
• Similar to Amazon AMIs
• CRUD images (VM templates, a bootable OS) and snapshots (VM backups)
• Private or public images
• Upload from a file or from a URL
• Metadata can hold any key-value pair, useful for documenting OS version, date, etc.
• Multiple disk formats (QCOW2, RAW, ISO, VDI, VMDK) and container formats (bare, OVF, AMI, ARI)
• Checksum and signature verification for extra security

OPERATOR VIEW
• Best practice: offer "golden images" to tenants via public Glance images
• Store images using Cinder as a back end
• If not using Ceph, director configures Swift as the Glance image store
• If using Ceph, Glance leverages advanced RBD features (cache, thin provisioning, immediate snapshots)
• Automatic Nova/libvirt/KVM optimization depending on the guest OS via the os_name attribute
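The checksum verification in the tenant view amounts to comparing a recorded digest against the digest of the received bytes. Historically Glance's checksum field stored an MD5 digest, which this sketch assumes; the payload is obviously a stand-in:

```python
import hashlib

# Compare the digest of downloaded image bytes against the value the
# image catalog recorded at upload time.
def verify_image(image_bytes: bytes, expected_checksum: str) -> bool:
    return hashlib.md5(image_bytes).hexdigest() == expected_checksum

data = b"fake-image-payload"                      # stand-in for image content
catalog_checksum = hashlib.md5(data).hexdigest()  # what the catalog would record
print(verify_image(data, catalog_checksum))       # True
print(verify_image(b"tampered", catalog_checksum))  # False
```

Signature verification goes a step further: the checksum proves integrity, while a signature also proves who published the image.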
IDENTITY AND ACCESS CONTROL (KEYSTONE)

TENANT VIEW
• I am not a hacker, believe me!
• My boss just gave me permission to ask for VMs
• Where are all the services?
• I am a project lead, I must be admin of my project

OPERATOR VIEW
• Who are you?
• Let me validate with LDAP
• I must integrate with my company's SSO
• I must secure entry points with TLS certificates
IDENTITY AND ACCESS CONTROL (KEYSTONE)

TENANT VIEW
• Similar to Amazon IAM
• Authenticates and authorizes users; provides the session tokens that are used for all OpenStack actions
• CRUD users, tenants (projects), and roles (as far as the operator allows)
• Change password, and download a credentials file (RC) with EC2 keys
• Discover OpenStack endpoints via the service catalog
• Kerberos for SSO both on the web (Horizon) and in the CLI on client systems with SSSD
• Federated identity: the same user/password across multiple OpenStack providers

OPERATOR VIEW
• CRUD users, tenants (projects), roles, and domains (for v3) for better RBAC
• SAML federation for authentication with external (pre-existing) providers or other clouds, via Red Hat SSO
• Multiple identity back ends: LDAP, Active Directory, FreeIPA, PAM, etc.
• Preferred authorization back end is MariaDB
• Lightweight tokens (Fernet) for better performance and scalability
• Logs in the standard, auditable CADF format
• Public endpoint protection with SSL/TLS
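Endpoint discovery via the catalog, as described above, is essentially a lookup by service type. A minimal sketch; the catalog structure and URLs are simplified stand-ins for a real Keystone token response:

```python
# Simplified service catalog, as a client might receive it with a token.
CATALOG = [
    {"type": "compute",  "name": "nova",     "url": "https://cloud.example.com:8774/v2.1"},
    {"type": "network",  "name": "neutron",  "url": "https://cloud.example.com:9696"},
    {"type": "identity", "name": "keystone", "url": "https://cloud.example.com:5000/v3"},
]

def endpoint_for(service_type: str) -> str:
    """Return the endpoint URL for a service type, as clients resolve it."""
    for entry in CATALOG:
        if entry["type"] == service_type:
            return entry["url"]
    raise LookupError(f"no endpoint for {service_type}")

print(endpoint_for("compute"))  # https://cloud.example.com:8774/v2.1
```

This is why clients only ever need the Keystone URL up front: everything else is resolved from the catalog at authentication time.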
ORCHESTRATION ENGINE (HEAT)

TENANT VIEW
• This is the blueprint of my application deployment: dependencies, config, etc.
• Can you run this for me?
• Scale it out when this threshold is reached

OPERATOR VIEW
• To compete with public clouds, I should offer an orchestration engine
• Auto-scaling, load balancers, and quotas allow me to monitor and predict demand
ORCHESTRATION ENGINE (HEAT)

TENANT VIEW
• Similar to Amazon CloudFormation
• CRUD templates (stacks) that can be stopped and resumed
• Instructs OpenStack to automate deployment of resources as defined in the HOT or CloudFormation (CFN) languages
• Well defined and mature, HOT offers more modularity and flexibility (e.g., resource chains, pre-delete hooks)
• Very useful when combined with Ceilometer and LBaaS; an example use case is instance auto-scaling, creating another VM when cluster load reaches 80% CPU

OPERATOR VIEW
• Heat may require minor tuning to ensure enough CPU and RAM are assigned to it
• Can offer shared, IT-approved templates
• Excellent integration with CloudForms to offer end users an advanced service catalog, with policies and customized quota and capacity management
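A minimal HOT template of the kind described above might look like the following sketch; the image and flavor names are placeholders, and a real template would normally also declare networking:

```yaml
heat_template_version: 2016-10-14
description: Minimal stack that boots one server

parameters:
  image:
    type: string
    default: rhel7-golden   # placeholder Glance image name
  flavor:
    type: string
    default: m1.small       # placeholder Nova flavor

resources:
  server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: { get_param: flavor }
```

Heat resolves the resource graph and drives the Nova, Neutron, and Cinder APIs in dependency order, so the tenant describes the end state rather than the steps.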
TELEMETRY (CEILOMETER)

TENANT VIEW
• How much CPU, RAM, disk am I using per hour?
• Notify me of any alarms here

OPERATOR VIEW
• I wish I could charge back/show back how much every user is consuming
• This is useful for my own internal usage!
TELEMETRY (CEILOMETER)

TENANT VIEW
• Similar to Amazon CloudWatch
• Metrics (CPU, RAM usage) and events (e.g., instance created) can be listed
• Alarms (e.g., CPU threshold reached) can also be triggered; alarm thresholds can be custom-defined, all via the Aodh API (pronounced "hey")
• Querying for historical values is available

OPERATOR VIEW
• Historically, Ceilometer required tuning at scale to let tenants poll historical values, and MongoDB was the only back end
• Ceilometer now offers much better performance and scalability thanks to the split of its components:
  • Gnocchi stores and indexes time-series metrics
  • Aodh does the same for alarms
  • Panko is the event engine
• Connects with CloudForms for capacity monitoring and management
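The 80% CPU auto-scaling example hinges on a threshold alarm. A minimal sketch of that evaluation logic; the window size and threshold are illustrative, and real Aodh evaluates statistics served by Gnocchi rather than raw lists:

```python
# Threshold-alarm evaluation: fire when the average of the most recent
# `evaluation_periods` samples exceeds the threshold.
def evaluate_alarm(samples, threshold=80.0, evaluation_periods=3):
    window = samples[-evaluation_periods:]
    if len(window) < evaluation_periods:
        return False  # not enough data yet
    return sum(window) / len(window) > threshold

cpu_util = [55.0, 70.0, 85.0, 90.0, 88.0]  # percent, most recent last
print(evaluate_alarm(cpu_util))  # True -> e.g. call a Heat scale-out webhook
```

Requiring several consecutive periods above the threshold is what keeps a single load spike from triggering a scale-out.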
BARE METAL FOR TENANTS (IRONIC)

TENANT VIEW
• I need a physical machine for a while, with a generic OS
• I do not have many security or isolation concerns, nor network protection needs

OPERATOR VIEW
• I have some spare nodes in a separate cluster, with a shared network
• I will offer them to trusted user groups
• I will provide the OS image
BARE METAL FOR TENANTS (IRONIC)

TENANT VIEW
• Similar to Amazon dedicated EC2 instances
• Nova commands are used against an existing bare-metal host aggregate
• After Ironic reserves a bare-metal node, Nova is used to provision the instance
• Only works with Glance images tagged hypervisor_type=ironic
• Can deploy Linux or Windows machines (Windows requires extra steps)

OPERATOR VIEW
• Allocates a pool of nodes to be entirely dedicated to certain tenants, on demand
• Requires careful design of the tenant-facing service (network isolation, security, etc.)
• Defines Nova host aggregates with the key-value baremetal and a flavor with the key hypervisor_type="ironic"
• Quotas and capacity planning are needed
• Good integration with most hardware vendors: Dell, Cisco, HP, etc.
• Introspection process to detect hardware capabilities
• Requires many Nova and Neutron changes (e.g., flat networking for PXE provisioning)
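The aggregate-plus-flavor matching described above can be sketched as a scheduler-style filter. The metadata key mirrors the deck's hypervisor_type example; the host names and data layout are made up for illustration:

```python
# Keep only hosts whose aggregate metadata satisfies every extra spec
# on the requested flavor - the essence of aggregate-based scheduling.
hosts = [
    {"name": "kvm-01",   "aggregate_metadata": {}},
    {"name": "metal-01", "aggregate_metadata": {"hypervisor_type": "ironic"}},
]
flavor_extra_specs = {"hypervisor_type": "ironic"}

def candidate_hosts(hosts, extra_specs):
    return [h["name"] for h in hosts
            if all(h["aggregate_metadata"].get(k) == v
                   for k, v in extra_specs.items())]

print(candidate_hosts(hosts, flavor_extra_specs))  # ['metal-01']
```

Choosing a bare-metal flavor therefore steers the request to Ironic-managed nodes without the tenant ever addressing a specific machine.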
DASHBOARD (HORIZON)

TENANT VIEW
• I need a UI to manage my workloads or troubleshoot
• I do not like the CLI
• I want to see my Heat topologies
• Quickly display my quota usage and default options

OPERATOR VIEW
• I want an admin panel
• I want quick access to my Red Hat Insights account
• I want to see all Neutron networks and routers
DATA PROCESSING (SAHARA)

TENANT VIEW
• I need a Hadoop cluster for a few hours
• I need to try different big data platforms
• I want my clusters to scale automatically

OPERATOR VIEW
• I do not have the manpower to customize big data platforms for all of my tenants
• I will get third-party providers and deliver their stacks as a service
DATA PROCESSING (SAHARA)

TENANT VIEW
• Similar to Amazon Elastic MapReduce (EMR)
• Run Hadoop workloads in a few clicks, without expertise in Hadoop operations
• Simple parameters such as Hadoop version, cluster topology, and node count
• Data can be hosted elsewhere (S3, Swift, etc.)

OPERATOR VIEW
• Rapid provisioning of Hadoop clusters for dev and QA
• "Analytics-as-a-Service" for bursty or ad hoc workloads
• Utilization of unused compute power from a general-purpose OpenStack cloud to perform data processing tasks
• Supports Hadoop distributions on CentOS and Red Hat Enterprise Linux 7:
  • Cloudera
  • Hortonworks
  • Ambari
  • MapR
• Plugin Image Packaging Tool to validate custom plug-ins, package them, and generate clusters from clean, versioned, OS-only images
SHARED FILE SYSTEM (MANILA)

TENANT VIEW
• I need a network folder to share files between VMs
• Sometimes I share it with other users in my team
• I do not want to manage the folder (permissions, quotas)

OPERATOR VIEW
• I do not have the time to create temporary shares and enable network security
• I wish I could automatically leverage OpenStack users and groups
SHARED FILE SYSTEM (MANILA)

TENANT VIEW
• Similar to Amazon Elastic File System, but not just NFS; also CIFS
• Creates a network file share, available on a Neutron shared network
• Can be shared with other tenants (RBAC), including mappings to LDAP entities
• User-defined quotas, policies, replication, snapshots, and extend/shrink capacity
• The VM operating system must connect to the share using whatever network protocol has been set (NFS, CIFS)

OPERATOR VIEW
• Significantly reduces operational burden
• Delegates storage management to end users with clearly defined limits and boundaries
• NFS (access by IP address or subnet)
• CIFS (authentication by user)
• In OpenStack Platform, Manila is GA and deployed via director
• Only the NetApp driver is GA
• The CephFS driver is Tech Preview
DIRECTOR
RED HAT OPENSTACK PLATFORM DIRECTOR
• Leverages best practices and reference architectures from extensive field experience
• Safely upgrade and update production OpenStack deployments
• Out-of-the-box control plane HA thanks to Pacemaker
• External load balancer support
• Ceph deployment and configuration as storage back end
• Can connect to existing Ceph
• API-driven deployment (and management) of Red Hat OpenStack Platform
• Allows CloudForms integration
RED HAT OPENSTACK PLATFORM DIRECTOR
• Supported partner hardware integration (Ironic, Cinder, Neutron, Manila)
  • Cisco UCS, Dell, Intel, HP, Fujitsu, SeaMicro, and Open CloudServer
  • Cisco Nexus 1000v (networking) and other SDNs
  • NetApp Data ONTAP (Cinder, Manila storage) and other storage
• Configuration stored as YAML code (VLANs, IP ranges)
• Director CLI unified with standard OpenStack interfaces
DIRECTOR
Building Scalable Clouds
• Scales to hundreds of nodes, automating the entire hardware life cycle
• Ready-state configuration for selected hardware that automatically configures RAID, BIOS, network bonding, etc.
• Pattern-based automatic discovery and selection of appropriate nodes from the hardware inventory
• Automated health checks can execute performance tests before deployment to identify possible misconfigurations or faulty servers
• Ability to validate the installation post-deployment using Tempest
• Easy to scale up and down: add compute and storage capacity (see deployment limits)
• Enhanced management via CloudForms for both tenants and administrators
TRIPLEO
OpenStack on OpenStack
• Director is based on the upstream OpenStack deployment project TripleO
• The operator uses a specialized OpenStack installation: the undercloud
• The undercloud deploys and updates the production OpenStack installation: the overcloud
DIRECTOR TRIPLEO
Life Cycle
DIRECTOR GRAPHICAL USER INTERFACE (UNDERCLOUD)
DIRECTOR VALIDATIONS (UNDERCLOUD)
• Ansible-driven solution to catch potential hardware, networking, and deployment issues, reducing deployment failures
• Eases the burden on IT staff by providing recommended configuration settings when issues are detected
• Helps customers achieve production-ready deployments throughout the entire process:
  • Pre-installation (prior to starting deployment)
  • Post-installation (checks after deployment)
• Upstream project: tripleo-validations
DEFAULT ROLES (REFERENCE ARCHITECTURE)
• Five default roles:
  • Controller
  • Compute
  • BlockStorage (Cinder)
  • CephStorage
  • ObjectStorage (Swift)
• Operators can easily customize and create their own roles
• Further tuning available as post-installation scripts
COMPOSABLE ROLES AND CUSTOM SERVICES
• Distribute services specific to your data center and architecture requirements to an individual node or group of nodes
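Composable roles are expressed in the deployment's roles data YAML. A hedged sketch of what a custom role entry might look like; the role name and count are illustrative, and real role definitions list many more services:

```yaml
# Sketch of a custom role in roles_data.yaml (abbreviated service list).
- name: NetworkerCustom
  CountDefault: 2
  ServicesDefault:
    - OS::TripleO::Services::NeutronL3Agent
    - OS::TripleO::Services::NeutronDhcpAgent
    - OS::TripleO::Services::Ntp
```

Moving a service line from one role to another is how an operator dedicates, say, Neutron agents to their own nodes without touching the rest of the deployment.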
CONTROL PLANE HIGH AVAILABILITY
NFV INSTALLATIONS WITH DIRECTOR
• Director allows the operator to define advanced resource partitioning at deploy time, with control of NUMA/CPU pinning, hugepages, IRQ isolation, SR-IOV, and OVS+DPDK, all via composable roles
SR-IOV DEPLOYMENT (DPDK GUEST)

OVS+DPDK DEPLOYMENT (DPDK GUEST)
INTEGRATIONS

RED HAT ENTERPRISE LINUX
CERTIFIED PARTNER PLUG-INS
SOFTWARE-DEFINED NETWORKING
• Dozens of software-defined networking (SDN) partners, Neutron-certified
• Director can automatically configure Cisco, Nuage, PLUMgrid
• More to come
• Two main networking models:
  • Software-centric: uses general-purpose hardware
  • Hardware-centric: requires specific network hardware
• Can extend Neutron via ML2 drivers, core plug-ins, or advanced services
RED HAT CLOUDFORMS
• Two complementary options:
  • OpenStack workload management
  • OpenStack infrastructure management
• OpenStack workload management
  • Tenant- and operator-facing
  • Self-service console
  • Orchestration
• OpenStack infrastructure management
  • Operator-facing
  • Deployment details, service monitoring, drift history
  • Scaling
RED HAT CEPH STORAGE
• Red Hat Ceph Storage is included with Red Hat OpenStack Platform, with support for up to 64TB
• Red Hat OpenStack Platform director supports Ceph Storage deployment:
  • Ceph is the default back end for OpenStack services
  • The Ceph cluster can be installed and updated by director
  • The overcloud can be connected to an externally managed Ceph cluster
  • The Ceph RADOS Object Gateway can be enabled
PERFORMANCE MONITORING

AVAILABILITY MONITORING

CENTRALIZED LOGGING

RED HAT IDENTITY MANAGEMENT
RED HAT SATELLITE
• Advanced management of node content:
  • Subscription management
  • Review of content (packages) on nodes
  • New content notification, errata overview
  • Manage which packages are available to nodes
OPENSHIFT CONTAINER PLATFORM

ANSIBLE ENGINE AND ANSIBLE TOWER

RED HAT CLOUD SUITE
HYPERVISORS

CHOOSING A HYPERVISOR
A hypervisor provides software to manage virtual machine access to the underlying hardware. The hypervisor creates, manages, and monitors virtual machines. OpenStack Compute (Nova) supports many hypervisors to various degrees, including:
• Ironic
• KVM
• LXC
• QEMU
• VMware ESX/ESXi
• Xen (using libvirt)
• XenServer
• Hyper-V
• PowerVM
• UML
• Virtuozzo
• zVM
HARDWARE

HARDWARE REQUIREMENTS
HARDWARE RECOMMENDATIONS
IOPS-OPTIMIZED SOLUTIONS
With the growing use of flash storage, IOPS-intensive workloads are increasingly being hosted on
Ceph clusters to let organizations emulate high-performance public cloud solutions with private
cloud storage. These workloads commonly involve structured data from MySQL-, MariaDB-, or
PostgreSQL-based applications. OSDs are typically hosted on NVMe SSDs with co-located Ceph
write journals. Typical servers are listed in Table 2, and include the following elements:
• CPU. 10 cores per NVMe SSD, assuming a 2 GHz CPU.
• RAM. 16GB baseline, plus 2GB per OSD.
• Networking. 10 Gigabit Ethernet (GbE) per 12 OSDs (each for client- and cluster-facing
networks).
• OSD media. High-performance, high-endurance enterprise NVMe SSDs.
• OSDs. Four per NVMe SSD.
• Journal media. High-performance, high-endurance enterprise NVMe SSD, co-located with OSDs.
• Controller. Native PCIe bus.
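The rules above combine into a per-node sizing exercise. A sketch that applies them directly (10 cores per NVMe SSD, 16 GB RAM baseline plus 2 GB per OSD, four OSDs per NVMe, one 10 GbE link per 12 OSDs per network); the 8-drive chassis is an illustrative input:

```python
import math

# Sizing for an IOPS-optimized Ceph node, per the recommendations above.
def iops_node_sizing(nvme_drives):
    osds = nvme_drives * 4                    # four OSDs per NVMe SSD
    return {
        "cores": nvme_drives * 10,            # 10 cores per NVMe (2 GHz CPU)
        "ram_gb": 16 + 2 * osds,              # 16 GB baseline + 2 GB per OSD
        "osds": osds,
        "nics_10gbe_per_network": math.ceil(osds / 12),
    }

print(iops_node_sizing(8))
# {'cores': 80, 'ram_gb': 80, 'osds': 32, 'nics_10gbe_per_network': 3}
```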
HARDWARE RECOMMENDATIONS
THROUGHPUT-OPTIMIZED SOLUTIONS
Throughput-optimized Ceph solutions are usually centered around semi-structured or unstructured
data. Large-block sequential I/O is typical. Storage media on OSD hosts is commonly HDDs with
write journals on SSD-based volumes. Typical server elements include:
• CPU. 0.5 cores per HDD, assuming a 2 GHz CPU.
• RAM. 16GB baseline, plus 2GB per OSD.
• Networking. 10 Gigabit Ethernet (GbE) per 12 OSDs (each for client- and cluster-facing
networks).
• OSD media. 7,200 RPM enterprise HDDs.
• OSDs. One per HDD.
• Journal media. High-endurance, high-performance enterprise serial-attached SCSI (SAS) or
NVMe SSDs.
• OSD-to-journal ratio. 4-5:1 for an SSD journal, or 12-18:1 for an NVMe journal.
• Host bus adapter (HBA). Just a bunch of disks (JBOD).
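The OSD-to-journal ratios above translate directly into a journal-device count per chassis. A sketch using the stated ratios (4-5:1 for SAS/SATA SSD journals, 12-18:1 for NVMe); the 36-HDD chassis is an illustrative input:

```python
import math

# Journal devices needed for a throughput-optimized node, given the
# OSD-to-journal ratio (one OSD per HDD).
def journals_needed(hdd_count, osds_per_journal):
    return math.ceil(hdd_count / osds_per_journal)

print(journals_needed(36, 5))   # 8 SAS/SATA SSD journals at a 5:1 ratio
print(journals_needed(36, 18))  # 2 NVMe journals at an 18:1 ratio
```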
HARDWARE RECOMMENDATIONS
COST/CAPACITY-OPTIMIZED SOLUTIONS
Cost/capacity-optimized solutions typically focus on higher capacity, or longer archival
scenarios. Data can be either semi-structured or unstructured. Workloads include media
archives, big data analytics archives, and machine image backups. Large-block sequential I/O is typical. For greater cost effectiveness, OSDs are usually hosted on HDDs with Ceph write
journals co-located on the HDDs. Solutions typically include the following elements:
• CPU. 0.5 cores per HDD, assuming a 2 GHz CPU.
• RAM. 16GB baseline, plus 2GB per OSD.
• Networking. 10 GbE per 12 OSDs (each for client- and cluster-facing networks).
• OSD media. 7,200 RPM enterprise HDDs.
• OSDs. One per HDD.
• Journal media. Co-located on the HDD.
• HBA. JBOD.
NETWORKING OPTION 1: PROVIDER NETWORKS

NETWORKING OPTION 2: SELF-SERVICE NETWORKS
NETWORK LAYOUT
• The example architectures assume use of the following networks:
  • Management on 10.0.0.0/24 with gateway 10.0.0.1. This network requires a gateway to provide Internet access to all nodes for administrative purposes such as package installation, security updates, DNS, and NTP.
  • Provider on 203.0.113.0/24 with gateway 203.0.113.1. This network requires a gateway to provide Internet access to all instances in your OpenStack environment.
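The example layout above can be sanity-checked with basic address arithmetic: each gateway must live inside its network, and the two ranges must not overlap.

```python
import ipaddress

# The two example networks and their gateways, as given above.
networks = {
    "management": ("10.0.0.0/24", "10.0.0.1"),
    "provider":   ("203.0.113.0/24", "203.0.113.1"),
}

for name, (cidr, gw) in networks.items():
    net = ipaddress.ip_network(cidr)
    assert ipaddress.ip_address(gw) in net, f"{name} gateway outside its network"

mgmt = ipaddress.ip_network(networks["management"][0])
prov = ipaddress.ip_network(networks["provider"][0])
print(mgmt.overlaps(prov))  # False: the two example ranges are disjoint
```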
SUBSCRIPTION

RED HAT OPENSTACK PLATFORM OFFERINGS
Red Hat OpenStack Platform subscriptions come in two versions:
1. Red Hat OpenStack Platform
2. Red Hat OpenStack Platform (without guest operating system)
The only difference between the two is that the first version includes the right to
use Red Hat Enterprise Linux® as the guest operating system in an unlimited
number of virtual machines hosted by OpenStack. Both versions include the
ability to run Red Hat OpenStack Platform.
A Red Hat OpenStack Platform subscription allows you to install and run the included software on a single server with up to two populated sockets. If the server has more than two sockets, you can stack additional subscriptions on it until the number of allowed sockets is equal to or greater than the number of populated sockets in the server.
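The stacking rule above reduces to simple ceiling division: each subscription covers two populated sockets.

```python
import math

# Subscriptions required for a server, per the two-socket rule above.
def subscriptions_needed(populated_sockets, sockets_per_subscription=2):
    return math.ceil(populated_sockets / sockets_per_subscription)

print(subscriptions_needed(2))  # 1
print(subscriptions_needed(4))  # 2 stacked subscriptions
print(subscriptions_needed(5))  # 3 (covers up to 6 sockets)
```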
ASSEMBLING YOUR SUBSCRIPTION ORDER
To determine the Red Hat OpenStack Platform subscription needed for each server in a private cloud deployment, look at the role the server will perform. The Red Hat OpenStack Platform deployment model includes two main concepts: the undercloud and the overcloud.
UNDERCLOUD
The undercloud installs, configures, and manages the overcloud.
Typically, a single server is assigned the role of being the
undercloud. The best practice is to install the following software
components in virtual machines on the undercloud server:
• Red Hat OpenStack Platform director
• Red Hat CloudForms
Since this server uses Red Hat OpenStack Platform and will run
virtual machines using Red Hat Enterprise Linux as the guest
operating system, a Red Hat OpenStack Platform subscription
should be purchased.
OVERCLOUD
The overcloud has all the components needed to run your private
cloud. The servers that host the overcloud are usually assigned one of
the following three roles:
• Controller: Nodes that provide administration, networking, and high
availability for the OpenStack environment.
• Compute: Nodes that provide computing resources for the
OpenStack environment.
• Storage: Nodes that provide storage for the OpenStack
environment.
Each role has different subscription considerations.
CONTROLLER
For controller nodes, consider whether or not you will deploy
any virtual machines on this server. If you will not deploy any
virtual machines (the most likely use case), or if any virtual
machines you do deploy on this server will not use Red Hat
Enterprise Linux as the guest operating system, then you
should purchase Red Hat OpenStack Platform (without guest
OS) for that server.
If you will deploy virtual machines on the controller node and
you will use Red Hat Enterprise Linux as the guest operating
system in those virtual machines, then you should purchase
Red Hat OpenStack Platform for that server.
COMPUTE
For compute nodes, consider whether or not you want to use
Red Hat Enterprise Linux as the guest operating system in any of
the virtual machines hosted on these servers. If you will use Red
Hat Enterprise Linux as the guest operating system, then you
should purchase Red Hat OpenStack Platform for that server.
If you will use another operating system, such as Windows, as
the guest operating system, or if you will use standalone Red Hat
Enterprise Linux Server or Red Hat Enterprise Linux for Virtual
Datacenters subscriptions for the guest operating system, you
should purchase Red Hat OpenStack Platform (without guest
OS) for that server.
STORAGE
For storage nodes, consider what type of storage will be used:
• Ceph storage nodes: Purchase Red Hat Ceph Storage
subscriptions for these servers.
• Block storage (Cinder) nodes: Purchase Red Hat OpenStack
Platform (without guest operating system) subscriptions for
these servers.
• Object storage (Swift) nodes: Purchase Red Hat OpenStack
Platform (without guest operating system) subscriptions for
these servers.
RED HAT CLOUDFORMS
A version of Red Hat CloudForms is included with each Red Hat OpenStack
Platform subscription. It is intended to be used as the day-two cloud
management tool for Red Hat OpenStack Platform.
It includes the complete feature set of Red Hat’s standalone CloudForms
offering. However, it can only be used to manage virtual machines that are
hosted by Red Hat OpenStack Platform. It cannot be used with any other
virtualization platform.
As an example, take a server using Red Hat OpenStack Platform to create and
run virtual machines. The included Red Hat CloudForms can manage all the
virtual machines hosted on that server.
However, if the private cloud includes a mix of compute servers using Red Hat
OpenStack Platform, VMware vSphere, and virtual machines hosted on Amazon
EC2, the included Red Hat CloudForms subscription can only be used to
manage the virtual machines being hosted on Red Hat OpenStack Platform.
RED HAT CEPH STORAGE
Red Hat OpenStack Platform and Red Hat Cloud Infrastructure subscriptions
include enablement software that is needed to use Red Hat Ceph Storage with
Red Hat OpenStack Platform. This enablement software includes the installation,
management, and monitoring tools for Ceph.
However, the Red Hat Ceph Storage software needed for the storage nodes is not
included. That software component is called Red Hat Ceph Storage object
storage daemon (OSD). It is the OSD for the Ceph distributed file system and is
responsible for storing objects on a local file system and providing access to them
over the network. This software component is only available in the Red Hat Ceph
Storage SKUs.
To expand your Red Hat Ceph Storage deployment into production, you can buy
any of the Red Hat Ceph Storage subscriptions, which start at 256 TB. For more
information about Red Hat's Ceph Storage solutions, visit
https://redhat.com/en/technologies/storage/ceph.
89
L I F E - CYC L E O P T I O N S
With the release of Red Hat OpenStack Platform version 10, the life-cycle periods were
changed based on feedback from customers. The new policy balances the needs of customers
who want access to the latest OpenStack technology as soon as it becomes available against
those who want to standardize on one version for the longest possible period.
To meet those needs, the life cycle for Red Hat OpenStack Platform will no longer be
three years for every major new release. Instead, you can choose either a one-year
(standard) or three-year (long-life) life cycle. With the three-year long-life version, you will
also have the option to purchase extended life-cycle support (ELS) for up to two
additional years. The life-cycle periods for version 10 and beyond are:
• Version 10 (based on upstream OpenStack community version “Newton”) — three
years (with the option to purchase up to two additional years).
• Version 11 (based on upstream version “Ocata”) — one year.
• Version 12 (based on upstream “P” version) — one year.
• Version 13 (based on upstream “Q” version) — three years (with the option to
purchase up to two additional years).
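The life-cycle options above can be summarized as a small table: each version gets a base support period, and only the long-life releases offer optional ELS. The sketch below encodes that policy; upstream names and years are taken from the slide text, and nothing here is an official Red Hat support matrix.

```python
# Hypothetical sketch of the life-cycle policy described above.
# version: (upstream name, base support years, optional ELS years)
LIFECYCLE = {
    10: ("Newton", 3, 2),
    11: ("Ocata", 1, 0),
    12: ("P", 1, 0),
    13: ("Q", 3, 2),
}

def support_years(version: int, with_els: bool = False) -> int:
    """Total years of support for an OSP version under this policy."""
    _, base, els = LIFECYCLE[version]
    return base + (els if with_els else 0)

print(support_years(10))                 # long-life release: 3 years
print(support_years(10, with_els=True))  # with purchased ELS: up to 5 years
```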
90
Innovations Development Lab
https://indevlab.com
info@indevlab.com
T H A N K YO U F O R YO U R AT T E N T I O N !
Dmytro Hanzhelo
R E D H AT ACC R E D I T E D P R O F E S S I O N A L
R E D H AT S A L E S E N G I N E E R S P E C I A L I ST - C LO U D I N F R A ST R U C T U R E

RedHat OpenStack Platform Overview

  • 1. R E D H AT O P E N STAC K P L AT F O R M OV E RV I E W I N N OVAT I O N S D E V E LO P M E N T L A B D M Y T R O H A N Z H E LO R E D H AT ACC R E D I T E D P R O F E S S I O N A L R E D H AT S A L E S E N G I N E E R S P E C I A L I ST - C LO U D I N F R A ST R U C T U R E
  • 2. G E N E R A L OV E RV I E W
  • 3. • The OpenStack project is an open source cloud computing platform that supports all types of cloud environments. The project aims for simple implementation, massive scalability, and a rich set of features. Cloud computing experts from around the world contribute to the project. • OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a variety of complementary services. Each service offers an Application Programming Interface (API) that facilitates this integration. • This guide covers step-by-step deployment of the major OpenStack services using a functional example architecture suitable for new users of OpenStack with sufficient Linux experience. This guide is not intended to be used for production system installations, but to create a minimum proof-of-concept for the purpose of learning about OpenStack. 3
  • 4. O P E N STAC K M A P 4
  • 5. O P E N STAC K CO M P O N E N T S • Compute (Nova) • Networking (Neutron) • Block Storage (Cinder) • Object Storage (Swift) • VM Image Storage (Glance) • Identity and Access Control (Keystone) 5 • Orchestration Engine (Heat) • Telemetry (Ceilometer) • Bare Metal for Tenants (Ironic) • Dashboard (Horizon) • Data Processing (Sahara) • Deployment and Management (Director)
  • 6. OV E RV I E W OpenStack Connects Two Worlds 6
  • 7. OV E RV I E W • Tenants are actual IaaS cloud users • Consume services enabled in the cloud • See only their own and shared cloud resources • Isolated from other tenants • Do not have a view to the cloud infrastructure • Operators are users with special privileges • Often the same role that has root access to the systems • Configure, monitor, and maintain OpenStack cloud for tenants • Aware about cloud infrastructure and external network and storage environment 7
  • 8. OV E RV I E W Cloud Interfaces • Tenants and operators use the same interfaces: • OpenStack Dashboard (Horizon) • CLI tools • REST APIs • Libraries (such as os_cloud in Ansible or boto in Python) • OpenStack policy engine filters which API calls require administrative privileges 8
  • 9. CO M P U T E ( N OVA ) 9 T E N A N T V I E W O P E R ATO R V I E W • I need VMs, anytime • How many can I have? • It must be secure • SSH and VNC, please • I have hardware capacity available • This is how you consume it • I set usage quotas • I design for performance and scalability
  • 10. CO M P U T E ( N OVA ) 10 T E N A N T V I E W O P E R ATO R V I E W • Similar to Amazon EC2 • Self-service VMs: Boot an instance of a selected flavor (vCPU, RAM, disk size), OS image (from Glance), SSH key pair, host aggregate or availability zone (AZ), custom metadata, user data, security groups, with/without ephemeral disk • Reboot, stop, resize, terminate • See the console log of this instance, open VNC/ SPICE session, change VM root password (if OS supports) • Reserve, assign, and release floating IPs • Manage key pairs and security groups • Check quota usage • Select which Neutron network or port • Other Neutron/Cinder shortcuts for network and volume management • No need to manage hypervisors individually, due to distributed design of OpenStack, at any scale • Supports KVM and VMware (vCenter) • Defines which choices are available to tenants: flavors offering specific capabilities and carefully planned capacity and overcommit ratios • Easier maintenance and operations with support for node evacuation, mark “host down,” and instance live-migration • Define host aggregates and AZs with specific metadata to allow advanced scheduling and request filtering • Set NFV-specific flavors, including vCPU pinning, large pages, vCPU, RAM, and I/O device NUMA awareness, SR-IOV/PCI passthrough • Instance HA, transparent to tenants, if enabled
  • 11. CO M P U T E ( N OVA ) 11
  • 12. N E T WO R K I N G ( N E U T R O N ) 12 T E N A N T V I E W O P E R ATO R V I E W • I need my own network, isolated from others • Some private IPs, some public IPs • These are my QoS specs • Let me share networks with others • I design a network overlay and provide external access • I have very few public IPs • I set rules, policies, quotas • With SDN, I can centrally manage and monitor it all
  • 13. N E T WO R K I N G ( N E U T R O N ) 13 T E N A N T V I E W O P E R ATO R V I E W • Similar to Amazon VPC, ELB • Create, Report, Update, Delete (CRUD) networks, subnets and ports, for basic L2 and L3 with IP address management (DHCP) • Define a tenant network (overlay) • Additionally: • Provider networks • Quotas • Security groups (per port) • East/West L3 routing with tenant-defined routers • External gateway, NAT, floating IPs • Load balancing, VPN and firewall • IPv6 tenant network management • QoS (rate limit policies) per port, per network • RBAC for granular sharing of tenant networks • Defines provider networks, manually set up in Neutron by the operator, representing pre-existing networks (i.e. VLAN), useful for pointing to corporate DNS or gateways with multiple routes • Multiple simultaneous L2 technologies on a single installation via ML2 • Default Open vSwitch, or choose from dozens of commercial SDN vendors • Configures SSL/TLS back end for LBaaS • Define floating IP ranges, normally for publicly routable IPv4 addresses • Offer/delegate IPv6 tenant networks (SLAAC, DHCP) • Define and enforce QoS (currently only egress flows) • VXLAN offloading to HW available (up to 4x throughput) • Distributed Virtual Routing (DVR) for better scalability • L2Pop and responder to mitigate ARP flooding at scale
  • 14. N E T WO R K I N G ( N E U T R O N ) 14
  • 15. B LO C K STO R AG E ( C I N D E R ) 15 T E N A N T V I E W O P E R ATO R V I E W • Too much data in my VMs! • I need permanent storage • Can I snapshot, back up, and roll back? • Encrypted, please • I constantly buy storage • I must allocate space to tenants • I can combine different tiers of technologies (NAS, SAN) • I set rules, policies, quotas
  • 16. B LO C K STO R AG E ( C I N D E R ) 16 T E N A N T V I E W O P E R ATO R V I E W • Similar to Amazon EBS • CRUD operations with block devices • Add hard drives to an instance • Persistent storage, can be cloned, snapshotted, replicated, or imported/exported to another AZ (also public storage like Google Cloud Storage) • Encryption available via LUKS (if enabled by Ops) • Hot-unplug from one instance and re-attach to another instance • Non-disruptive and incremental snapshot: ideal for backup/restore and DR use cases • QoS available (total IOPS) • If exposed, vendor-specific features (mirroring, compression, replication, and thin provisioning) • Uses Red Hat Ceph storage by default • Multiple back ends (LVM, iSCSI, NFS, ScaleIO, etc.) including proprietary ones with more specific features • Faster provisioning via oversubscription, thin provisioning, and generic image cache • Simplified operations, DR and backup with generic volume migration and replication (sync/async, with N number of replicas) between different storage back ends • Private volume types for premium levels of service (SSD, thick provisioned) • iSCSI multi-path support for extra reliability
  • 17. B LO C K STO R AG E ( C I N D E R ) 17
  • 18. O B J E C T STO R AG E ( S W I F T ) 18 T E N A N T V I E W O P E R ATO R V I E W • My application needs object storage (files, media) • I can use HTTP(s) • Stateless, please! No time for mounting file systems • I offer a private S3- like experience • I must scale without limits • I want advanced features
  • 19. O B J E C T STO R AG E ( S W I F T ) 19 T E N A N T V I E W O P E R ATO R V I E W • Similar to Amazon S3 (a modern version of FTP, WebDAV) • CRUD objects in containers, per account • Ideal for storing static objects (media, web files, email) • Only useful if the application understands the Swift/S3 API • Also useful for storing Glance image backups • Not meant to be used as POSIX file system • Fast-POST allows fast, efficient updates of metadata without uploading content again • Very few dependencies with other OpenStack modules, mostly Keystone for RBAC • Scales horizontally up to petabytes • Replication for global clusters • Advanced Swift features: middleware for API processing, temporary URLs, URL rewrite • Swift requires its own storage space, not integrated with Ceph • Reduced availability for further storage efficiency with erasure coding
  • 20. O B J E C T STO R AG E ( S W I F T ) 20
  • 21. V M I M AG E STO R AG E ( G L A N C E ) 21 T E N A N T V I E W O P E R ATO R V I E W • Which operating systems can I use? • This is my own version, store it just for me • Is the OS image genuine? • Take this VMware template and import it • Only approved OS can be used in my cloud • Centrally offer updated OS • Leverage storage integration to reduce network usage
• 22. V M I M AG E STO R AG E ( G L A N C E ) 22
T E N A N T V I E W
• Similar to Amazon AMIs
• CRUD images (VM templates, a bootable OS) and snapshots (VM backups)
• Private or public images
• Upload from file or from URL
• Metadata can host any key-value pair, useful for documenting OS version, date, etc.
• Multiple disk formats (QCOW2, RAW, ISO, VDI, VMDK) and container formats (bare, OVF, AMI, ARI)
• Checksum and signature verification for extra security
O P E R ATO R V I E W
• Best practice: offer "golden images" to tenants via public Glance images
• Store images using Cinder as a back end
• If not using Ceph, director configures Swift as the Glance image store
• If using Ceph, Glance leverages advanced RBD features (cache, thin provisioning, immediate snapshot)
• Automatic Nova/libvirt/KVM optimization depending on guest OS via the os_name attribute
  • 23. V M I M AG E STO R AG E ( G L A N C E ) 23
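The checksum verification mentioned above can be reproduced client-side: stream the downloaded image and compare the digest with the checksum Glance reports. Glance historically exposes an MD5 `checksum` property; newer releases add `os_hash_algo`/`os_hash_value` (typically SHA-512). A minimal sketch:

```python
import hashlib

def image_checksum(path, algo='md5', chunk_size=1024 * 1024):
    """Stream a local image file and return its hex digest, reading in
    chunks so large images do not need to fit in memory."""
    digest = hashlib.new(algo)
    with open(path, 'rb') as f:
        for block in iter(lambda: f.read(chunk_size), b''):
            digest.update(block)
    return digest.hexdigest()
```

Comparing this value to the image's recorded checksum before booting from it catches both corruption in transit and tampering.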
• 24. I D E N T I TY A N D ACC E S S CO N T R O L ( K E Y STO N E ) 24
T E N A N T V I E W
• I am not a hacker, believe me!
• My boss just gave me permission to ask for VMs
• Where are all the services?
• I am a project lead, I must be admin of my project
O P E R ATO R V I E W
• Who are you?
• Let me validate with LDAP
• I must integrate with my company's SSO
• I must secure entry points with TLS certificates
• 25. I D E N T I TY A N D ACC E S S CO N T R O L ( K E Y STO N E ) 25
T E N A N T V I E W
• Similar to Amazon IAM
• Authenticates and authorizes users; provides the session tokens used for all OpenStack actions
• CRUD users, tenants (projects), and roles (as far as the operator allows)
• Change password; download a credentials file (RC) with EC2 keys
• Discover OpenStack endpoints via the catalog
• Kerberos for SSO, both on the web (Horizon) and in the CLI on client systems with SSSD
• Federated identity: the same user/password across multiple OpenStack providers
O P E R ATO R V I E W
• CRUD users, tenants (projects), roles, and domains (v3) for better RBAC
• SAML federation for authentication with pre-existing external providers or other clouds, via Red Hat SSO
• Multiple identity back ends: LDAP, Active Directory, FreeIPA, PAM, etc.
• Preferred authorization back end is MariaDB
• Lightweight tokens (Fernet) for better performance and scalability
• Logs in the standard, auditable CADF format
• Public endpoint protection with SSL/TLS
  • 26. I D E N T I TY A N D ACC E S S CO N T R O L ( K E Y STO N E ) 26
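At its core, the authorization question Keystone answers is "does this user hold this role on this project?" The toy model below illustrates only that reduction; it is not Keystone's actual schema or API, and the users, projects, and roles are invented for the example.

```python
# Toy model of project-scoped role assignments (illustrative only).
ASSIGNMENTS = {
    ('alice', 'project-dev'): {'admin', 'member'},
    ('bob', 'project-dev'): {'member'},
}

def has_role(user, project, role):
    """Check whether a user holds a role on a project: the question
    every policy rule (e.g. "role:admin") ultimately reduces to."""
    return role in ASSIGNMENTS.get((user, project), set())
```

In real deployments the assignment data lives in the configured identity/assignment back end (MariaDB, LDAP, etc.), and the scoped token a user receives carries the resulting role list.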
• 27. O R C H E ST R AT I O N E N G I N E ( H E AT ) 27
T E N A N T V I E W
• This is the blueprint of my application deployment: dependencies, config, etc.
• Can you run this for me?
• Scale it out when this threshold is reached
O P E R ATO R V I E W
• To compete with public clouds, I should offer an orchestration engine
• Auto-scaling, load balancers, and quotas allow me to monitor and predict demand
• 28. O R C H E ST R AT I O N E N G I N E ( H E AT ) 28
T E N A N T V I E W
• Similar to Amazon CloudFormation
• CRUD templates (stacks) that can be stopped and resumed
• Instructs OpenStack to automate deployment of resources as defined in the HOT or CloudFormation (CFN) languages
• Well defined and mature, HOT offers more modularity and flexibility (e.g., resource chains, pre-delete hooks)
• Very useful when combined with Ceilometer and LBaaS. An example use case is instance auto-scaling: create another VM when cluster load reaches 80% CPU
O P E R ATO R V I E W
• Heat may require minor tuning to ensure enough CPU and RAM are assigned to it
• Can offer shared templates, approved by IT
• Excellent integration with CloudForms to create an advanced service catalog for end users, with policies and customized quota and capacity management
  • 29. O R C H E ST R AT I O N E N G I N E ( H E AT ) 29
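A minimal HOT template for the auto-scaling use case described above might look like the following sketch. The image, flavor, and network names are placeholders, and a real template would also define Ceilometer/Aodh alarms and `OS::Heat::ScalingPolicy` resources to drive the group.

```yaml
heat_template_version: 2016-10-14
description: >
  Illustrative sketch: servers managed by an auto-scaling group.
  Image, flavor, and network names are placeholders.
resources:
  group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 3
      resource:
        type: OS::Nova::Server
        properties:
          image: rhel7-golden      # placeholder Glance image
          flavor: m1.small         # placeholder flavor
          networks:
            - network: private     # placeholder Neutron network
```

Launching the stack (`openstack stack create -t template.yaml ...`) hands the whole lifecycle to Heat, which then creates or deletes servers to keep the group within its bounds.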
• 30. T E L E M E T R Y ( C E I LO M E T E R ) 30
T E N A N T V I E W
• How much CPU, RAM, and disk am I using per hour?
• Notify me of any alarms here
O P E R ATO R V I E W
• I wish I could charge back/show back how much every user is consuming
• This is useful for my own internal usage!
• 31. T E L E M E T R Y ( C E I LO M E T E R ) 31
T E N A N T V I E W
• Similar to Amazon CloudWatch
• Metrics (CPU, RAM usage) and events (e.g., instance created) can be listed
• Alarms (e.g., CPU threshold reached) can be triggered; alarm thresholds can be custom defined, all via the Aodh API (pronounced "hey")
• Querying for historical values is available
O P E R ATO R V I E W
• Historically, Ceilometer required tuning at scale to let tenants poll historical values, and MongoDB was the only back end
• Now Ceilometer offers much better performance and scalability thanks to the split of its components:
• Gnocchi stores and indexes time-series metrics
• Aodh does the same for alarms
• Panko is the event engine
• Connects with CloudForms for capacity monitoring and management
  • 32. T E L E M E T R Y ( C E I LO M E T E R ) 32
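An alarm like the 80% CPU example above is created through the Aodh API. The request body below is an illustrative sketch of a Gnocchi-backed threshold alarm; the resource ID and webhook URL are placeholders, and field names should be checked against the Aodh version deployed.

```json
{
  "name": "cpu_high",
  "type": "gnocchi_resources_threshold",
  "gnocchi_resources_threshold_rule": {
    "metric": "cpu_util",
    "resource_type": "instance",
    "resource_id": "INSTANCE_UUID_PLACEHOLDER",
    "aggregation_method": "mean",
    "comparison_operator": "gt",
    "threshold": 80.0,
    "granularity": 300
  },
  "alarm_actions": ["http://scale-webhook.example.com/"]
}
```

When the evaluated metric crosses the threshold, Aodh fires the alarm actions; pointing the action at a Heat scaling-policy webhook is what closes the auto-scaling loop.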
• 33. B A R E M E TA L F O R T E N A N T S ( I R O N I C ) 33
T E N A N T V I E W
• I need a physical machine for a while, with a generic OS
• I do not have many security or isolation concerns, nor network protection needs
O P E R ATO R V I E W
• I have some spare nodes in a separate cluster, with a shared network
• I will offer them to trusted user groups
• I will provide the OS image
• 34. B A R E M E TA L F O R T E N A N T S ( I R O N I C ) 34
T E N A N T V I E W
• Similar to Amazon dedicated EC2 instances
• Nova commands are used against an existing bare-metal host aggregate
• After Ironic reserves a bare-metal node, Nova is used to provision the instance
• Only works with Glance images tagged hypervisor_type=ironic
• Can deploy Linux or Windows machines (requires extra steps)
O P E R ATO R V I E W
• Dedicates a pool of nodes to specific tenants, on demand
• Requires careful design for a tenant-facing service (network isolation, security, etc.)
• Defines Nova host aggregates with the key-value baremetal and a flavor with the key hypervisor_type="ironic"
• Quotas and capacity planning are needed
• Good integration with most hardware vendors: Dell, Cisco, HP, etc.
• Introspection process to detect hardware capabilities
• Requires many Nova and Neutron changes (e.g., flat networking for PXE provisioning)
  • 35. B A R E M E TA L F O R T E N A N T S ( I R O N I C ) 35
• 36. DA S H B OA R D ( H O R I ZO N ) 36
T E N A N T V I E W
• I need a UI to manage my workloads or troubleshoot
• I do not like the CLI
• I want to see my Heat topologies
• Quickly display my quota usage and default options
O P E R ATO R V I E W
• I want an admin panel
• I want quick access to my Red Hat Insights account
• I want to see all Neutron networks and routers
  • 37. DA S H B OA R D ( H O R I ZO N ) 37
• 38. DATA P R O C E S S I N G ( S A H A R A ) 38
T E N A N T V I E W
• I need a Hadoop cluster for a few hours
• I need to try different big data platforms
• I want my clusters to scale automatically
O P E R ATO R V I E W
• I do not have the manpower to customize big data platforms for all of my tenants
• I will get third-party providers and deliver their stacks as a service
• 39. DATA P R O C E S S I N G ( S A H A R A ) 39
T E N A N T V I E W
• Similar to Amazon Elastic MapReduce (EMR)
• Run Hadoop workloads in a few clicks without expertise in Hadoop operations
• Simple parameters such as Hadoop version, cluster topology, and node count
• Data can be hosted elsewhere (S3, Swift, etc.)
O P E R ATO R V I E W
• Rapid provisioning of Hadoop clusters for Dev and QA
• "Analytics-as-a-Service" for bursty or ad hoc workloads
• Uses spare compute power from a general-purpose OpenStack cloud for data processing tasks
• Supports Hadoop distributions on CentOS and Red Hat Enterprise Linux 7: Cloudera, Hortonworks, Ambari, MapR
• Plugin Image Packaging Tool to validate custom plug-ins, package them, and generate clusters from clean, versioned, OS-only images
  • 40. DATA P R O C E S S I N G ( S A H A R A ) 40
• 41. S H A R E D F I L E SY ST E M ( M A N I L A ) 41
T E N A N T V I E W
• I need a network folder to share files between VMs
• Sometimes I share it with other users in my team
• I do not want to manage the folder (permissions, quotas)
O P E R ATO R V I E W
• I do not have the time to create temporary shares and enable network security
• I wish I could automatically leverage OpenStack users and groups
• 42. S H A R E D F I L E SY ST E M ( M A N I L A ) 42
T E N A N T V I E W
• Similar to Amazon Elastic File System, but supports CIFS as well as NFS
• Creates a network file share, available on a Neutron shared network
• Can be shared with other tenants (RBAC), including mappings to LDAP entities
• User-defined quotas, policies, replication, snapshots, and extend/shrink capacity
• The VM operating system must connect to the share using whatever network protocol has been set (NFS, CIFS)
O P E R ATO R V I E W
• Significantly reduces operational burden
• Delegates storage management to end users with clearly defined limits and boundaries
• NFS (access by IP address or subnet); CIFS (authentication by user)
• In OpenStack Platform, Manila is GA and deployed via director
• Only the NetApp driver is GA; the CephFS driver is Tech Preview
  • 43. S H A R E D F I L E SY ST E M ( M A N I L A ) 43
  • 44. D I R E C TO R
• 45. R E D H AT O P E N STAC K P L AT F O R M D I R E C TO R 45
• Leverages best practices and reference architectures from extensive field experience
• Safely upgrade and update production OpenStack deployments
• Out-of-the-box control plane HA thanks to Pacemaker
• External load balancer support
• Ceph deployment and configuration as storage back end
• Can connect to an existing Ceph cluster
• API-driven deployment (and management) of Red Hat OpenStack Platform
• Allows CloudForms integration
• 46. R E D H AT O P E N STAC K P L AT F O R M D I R E C TO R 46
• Supported partner hardware integration (Ironic, Cinder, Neutron, Manila)
• Cisco UCS, Dell, Intel, HP, Fujitsu, SeaMicro, and Open CloudServer
• Cisco Nexus 1000v (networking) and other SDNs
• NetApp Data ONTAP (Cinder, Manila storage) and other storage
• Configuration stored as YAML code (VLANs, IP ranges)
• Director CLI unified with standard OpenStack interfaces
• 47. D I R E C TO R 47
Building Scalable Clouds
• Scales to hundreds of nodes, automating the entire hardware life cycle
• Ready-state configuration for selected hardware that automatically configures RAID, BIOS, network bonding, etc.
• Pattern-based automatic discovery and selection of appropriate nodes from the hardware inventory
• Automated Health Check can execute performance tests before deployment to identify possible misconfigurations or faulty servers
• Ability to validate the installation post deployment using Tempest
• Easy to scale up and down: add compute and storage capacity (see deployment limits)
• Enhanced management via CloudForms for both tenants and administrators
• 48. T R I P L E O 48
OpenStack on OpenStack
• Director is based on TripleO, the upstream OpenStack deployment project
• The operator uses a specialized OpenStack installation: the undercloud
• The undercloud deploys and updates the production OpenStack installation: the overcloud
  • 49. D I R E C TO R T R I P L E O 49 Life Cycle
  • 50. D I R E C TO R G R A P H I CA L U S E R I N T E R FAC E ( U N D E R C LO U D ) 50
• 51. D I R E C TO R VA L I DAT I O N S ( U N D E R C LO U D ) 51
• Ansible-driven solution to catch potential hardware, networking, and deployment issues and reduce deployment failures
• Simplifies the burden on IT staff by providing recommended configuration settings when issues are detected
• Helps customers achieve production-ready deployments through the entire process:
• Pre-installation (prior to starting deployment)
• Post-installation (checks after deployment)
• Upstream project: tripleo-validations
• 52. D E FAU LT R O L E S ( R E F. A R C H ) 52
• Five default roles:
• Controller
• Compute
• BlockStorage (Cinder)
• CephStorage
• ObjectStorage (Swift)
• Operators can easily customize and create their own roles
• Further tuning available as post-installation scripts
• 53. CO M P O S A B L E R O L E S A N D C U STO M S E RV I C E S 53
• Distribute services specific to your data center and architecture requirements to individual nodes or groups of nodes
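Composable roles are expressed in the deployment plan's roles_data.yaml. The fragment below is a hedged sketch of a dedicated networking role; the exact service list varies by release, so take it from the roles file shipped with your version rather than from this example.

```yaml
# Illustrative roles_data.yaml entry: a standalone networking role.
# Service names use the OS::TripleO::Services namespace; trim the list
# to what your architecture actually requires.
- name: Networker
  description: Standalone node running only Neutron agents
  ServicesDefault:
    - OS::TripleO::Services::Kernel
    - OS::TripleO::Services::Ntp
    - OS::TripleO::Services::NeutronL3Agent
    - OS::TripleO::Services::NeutronDhcpAgent
    - OS::TripleO::Services::NeutronMetadataAgent
    - OS::TripleO::Services::NeutronOvsAgent
```

Passing a customized roles file to the overcloud deployment lets director place exactly these services on the nodes assigned to the role.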
  • 54. CO N T R O L P L A N E H I G H AVA I L A B I L I TY 54
• 55. N F V I N STA L L AT I O N S W I T H D I R E C TO R 55
• Director allows operators to define advanced resource partitioning at deploy time, with control of NUMA/CPU pinning, hugepages, IRQ isolation, SR-IOV, and OVS+DPDK, all via composable roles
[Diagrams: S R - I OV D E P LOY M E N T ( D P D K G U E ST ) and OV S + D P D K D E P LOY M E N T ( D P D K G U E ST )]
  • 56. I N T E G R AT I O N S
  • 57. R E D H AT E N T E R P R I S E L I N U X 57
  • 58. C E RT I F I E D PA RT N E R P LU G - I N S 58
• 59. S O F T WA R E - D E F I N E D N E T WO R K I N G 59
• Dozens of software-defined networking (SDN) partners, Neutron certified
• Director can automatically configure Cisco, Nuage, PLUMgrid
• More to come
• Two main networking models:
• Software-centric: uses general-purpose hardware
• Hardware-centric: requires specific network hardware
• Can extend Neutron via ML2 drivers, core plug-ins, or advanced services
• 60. R E D H AT C LO U D F O R M S 60
• Two complementary options: OpenStack workload management and OpenStack infrastructure management
• OpenStack workload management
• Tenant- and operator-facing
• Self-service console
• Orchestration
• OpenStack infrastructure management
• Operator-facing
• Deployment details, service monitoring, drift history
• Scaling
• 61. R E D H AT C E P H STO R AG E 61
• Red Hat Ceph Storage is included with Red Hat OpenStack Platform
• Support for up to 64TB
• Red Hat OpenStack Platform director supports Ceph Storage deployment
• Ceph is the default back end for OpenStack services
• The Ceph cluster can be installed and updated by director
• The overcloud can be connected to an externally managed Ceph cluster
• Ceph RADOS Object Gateway can be enabled
  • 62. P E R F O R M A N C E M O N I TO R I N G 62
  • 63. AVA I L A B I L I TY M O N I TO R I N G 63
  • 64. C E N T R A L I Z E D LO G G I N G 64
  • 65. R E D H AT I D E N T I TY M A N AG E M E N T 65
• 66. R E D H AT S AT E L L I T E 66
Advanced Management of Node Content
• Subscription management
• Review of content (packages) on nodes
• New content notification, errata overview
• Manage which packages are available to nodes
  • 67. O P E N S H I F T CO N TA I N E R P L AT F O R M 67
• 68. A N S I B L E E N G I N E A N D A N S I B L E TO W E R 68
  • 69. R E D H AT C LO U D S U I T E 69
  • 70. H Y P E RV I S O R S
• 71. C H O O S I N G A H Y P E RV I S O R 71
A hypervisor provides software to manage virtual machine access to the underlying hardware. The hypervisor creates, manages, and monitors virtual machines. OpenStack Compute (nova) supports many hypervisors to various degrees, including:
• Ironic
• KVM
• LXC
• QEMU
• VMware ESX/ESXi
• Xen (using libvirt)
• XenServer
• Hyper-V
• PowerVM
• UML
• Virtuozzo
• zVM
  • 72. H A R D WA R E
  • 73. H A R D WA R E R E Q U I R E M E N T S 73
• 74. H A R D WA R E R E CO M M E N DAT I O N S 74
IOPS-OPTIMIZED SOLUTIONS
With the growing use of flash storage, IOPS-intensive workloads are increasingly being hosted on Ceph clusters to let organizations emulate high-performance public cloud solutions with private cloud storage. These workloads commonly involve structured data from MySQL-, MariaDB-, or PostgreSQL-based applications. OSDs are typically hosted on NVMe SSDs with co-located Ceph write journals. Typical servers are listed in Table 2, and include the following elements:
• CPU: 10 cores per NVMe SSD, assuming a 2 GHz CPU
• RAM: 16GB baseline, plus 2GB per OSD
• Networking: 10 Gigabit Ethernet (GbE) per 12 OSDs (each for client- and cluster-facing networks)
• OSD media: high-performance, high-endurance enterprise NVMe SSDs
• OSDs: four per NVMe SSD
• Journal media: high-performance, high-endurance enterprise NVMe SSD, co-located with OSDs
• Controller: native PCIe bus
• 75. H A R D WA R E R E CO M M E N DAT I O N S 75
THROUGHPUT-OPTIMIZED SOLUTIONS
Throughput-optimized Ceph solutions are usually centered around semi-structured or unstructured data. Large-block sequential I/O is typical. Storage media on OSD hosts is commonly HDDs with write journals on SSD-based volumes. Typical server elements include:
• CPU: 0.5 cores per HDD, assuming a 2 GHz CPU
• RAM: 16GB baseline, plus 2GB per OSD
• Networking: 10 Gigabit Ethernet (GbE) per 12 OSDs (each for client- and cluster-facing networks)
• OSD media: 7,200 RPM enterprise HDDs
• OSDs: one per HDD
• Journal media: high-endurance, high-performance enterprise serial-attached SCSI (SAS) or NVMe SSDs
• OSD-to-journal ratio: 4-5:1 for an SSD journal, or 12-18:1 for an NVMe journal
• Host bus adapter (HBA): just a bunch of disks (JBOD)
• 76. H A R D WA R E R E CO M M E N DAT I O N S 76
COST/CAPACITY-OPTIMIZED SOLUTIONS
Cost/capacity-optimized solutions typically focus on higher capacity or longer archival scenarios. Data can be either semi-structured or unstructured. Workloads include media archives, big data analytics archives, and machine image backups. Large-block sequential I/O is typical. For greater cost effectiveness, OSDs are usually hosted on HDDs with Ceph write journals co-located on the HDDs. Solutions typically include the following elements:
• CPU: 0.5 cores per HDD, assuming a 2 GHz CPU
• RAM: 16GB baseline, plus 2GB per OSD
• Networking: 10 GbE per 12 OSDs (each for client- and cluster-facing networks)
• OSD media: 7,200 RPM enterprise HDDs
• OSDs: one per HDD
• Journal media: co-located on the HDD
• HBA: JBOD
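The sizing rules shared by the three profiles above reduce to simple arithmetic. The helpers below just encode those rules of thumb for quick estimates; validate real designs against your vendor's reference architecture.

```python
import math

def osd_node_ram_gb(osd_count, baseline_gb=16, per_osd_gb=2):
    # RAM guideline common to all three profiles: 16 GB baseline + 2 GB per OSD
    return baseline_gb + per_osd_gb * osd_count

def nics_10gbe(osd_count, osds_per_link=12):
    # One 10 GbE link per 12 OSDs, on each of the client and cluster networks
    return max(1, math.ceil(osd_count / osds_per_link))

def hdd_node_cpu_cores(hdd_count, cores_per_hdd=0.5):
    # Throughput/cost profiles: 0.5 cores per HDD, assuming 2 GHz cores
    return math.ceil(hdd_count * cores_per_hdd)
```

For example, a 12-HDD throughput-optimized node works out to 40 GB of RAM, 6 CPU cores, and one 10 GbE link per network.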
• 77. N E T WO R K I N G O P T I O N 1 : P R OV I D E R N E T WO R K S 77
• 78. N E T WO R K I N G O P T I O N 2 : S E L F - S E RV I C E N E T WO R K 78
• 79. N E T WO R K L AYO U T 79
• The example architectures assume use of the following networks:
• Management on 10.0.0.0/24 with gateway 10.0.0.1. This network requires a gateway to provide Internet access to all nodes for administrative purposes such as package installation, security updates, DNS, and NTP.
• Provider on 203.0.113.0/24 with gateway 203.0.113.1. This network requires a gateway to provide Internet access to all instances in your OpenStack environment.
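The two example networks can be sanity-checked with Python's stdlib ipaddress module. The host count below is the naive "subnet minus network, broadcast, and gateway addresses" figure; a real deployment also reserves addresses for DHCP, VIPs, and so on.

```python
import ipaddress

def usable_hosts(cidr, gateway):
    """Verify the gateway sits inside the subnet and return the number
    of addresses left for nodes or instances (excluding the network,
    broadcast, and gateway addresses)."""
    net = ipaddress.ip_network(cidr)
    gw = ipaddress.ip_address(gateway)
    assert gw in net, 'gateway must be inside the subnet'
    return net.num_addresses - 3

management_hosts = usable_hosts('10.0.0.0/24', '10.0.0.1')
provider_hosts = usable_hosts('203.0.113.0/24', '203.0.113.1')
```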
  • 80. S U B S C R I P T I O N
• 81. R E D H AT O P E N STAC K P L AT F O R M O F F E R I N G S 81
Red Hat OpenStack Platform subscriptions come in two versions:
1. Red Hat OpenStack Platform
2. Red Hat OpenStack Platform (without guest operating system)
The only difference between the two is that the first version includes the right to use Red Hat Enterprise Linux® as the guest operating system in an unlimited number of virtual machines hosted by OpenStack. Both versions include the ability to run Red Hat OpenStack Platform.
A Red Hat OpenStack Platform subscription allows you to install and run the included software on a single server with up to two populated sockets. If the server has more than two sockets, you can stack additional subscriptions on it until the number of allowed sockets is equal to or greater than the number of populated sockets in the server.
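The socket-stacking rule above is simple ceiling arithmetic, sketched here for quick sizing (illustrative only; confirm entitlement counts with your Red Hat representative):

```python
import math

def subscriptions_needed(populated_sockets, sockets_per_subscription=2):
    """One subscription covers up to two populated sockets; stack more
    subscriptions until coverage meets or exceeds the socket count."""
    return math.ceil(populated_sockets / sockets_per_subscription)
```

So a two-socket server needs one subscription, while a four-socket server needs two stacked on the same host.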
• 82. A S S E M B L I N G YO U R S U B S C R I P T I O N O R D E R 82
To determine the Red Hat OpenStack Platform subscription needed for each server in a private cloud deployment, look at the role the server will perform. The Red Hat OpenStack Platform deployment model includes two main concepts: the undercloud and the overcloud.
• 83. U N D E R C LO U D 83
The undercloud installs, configures, and manages the overcloud. Typically, a single server is assigned the role of the undercloud. The best practice is to install the following software components in virtual machines on the undercloud server:
• Red Hat OpenStack Platform director
• Red Hat CloudForms
Since this server uses Red Hat OpenStack Platform and will run virtual machines using Red Hat Enterprise Linux as the guest operating system, a Red Hat OpenStack Platform subscription should be purchased.
• 84. OV E R C LO U D 84
The overcloud has all the components needed to run your private cloud. The servers that host the overcloud are usually assigned one of the following three roles:
• Controller: nodes that provide administration, networking, and high availability for the OpenStack environment.
• Compute: nodes that provide computing resources for the OpenStack environment.
• Storage: nodes that provide storage for the OpenStack environment.
Each role has different subscription considerations.
  • 85. CO N T R O L L E R For controller nodes, consider whether or not you will deploy any virtual machines on this server. If you will not deploy any virtual machines (the most likely use case), or if any virtual machines you do deploy on this server will not use Red Hat Enterprise Linux as the guest operating system, then you should purchase Red Hat OpenStack Platform (without guest OS) for that server. If you will deploy virtual machines on the controller node and you will use Red Hat Enterprise Linux as the guest operating system in those virtual machines, then you should purchase Red Hat OpenStack Platform for that server. 85
  • 86. CO M P U T E For compute nodes, consider whether or not you want to use Red Hat Enterprise Linux as the guest operating system in any of the virtual machines hosted on these servers. If you will use Red Hat Enterprise Linux as the guest operating system, then you should purchase Red Hat OpenStack Platform for that server. If you will use another operating system, such as Windows, as the guest operating system, or if you will use standalone Red Hat Enterprise Linux Server or Red Hat Enterprise Linux for Virtual Datacenters subscriptions for the guest operating system, you should purchase Red Hat OpenStack Platform (without guest OS) for that server. 86
• 87. STO R AG E 87
For storage nodes, consider what type of storage will be used:
• Ceph storage nodes: purchase Red Hat Ceph Storage subscriptions for these servers.
• Block storage (Cinder) nodes: purchase Red Hat OpenStack Platform (without guest operating system) subscriptions for these servers.
• Object storage (Swift) nodes: purchase Red Hat OpenStack Platform (without guest operating system) subscriptions for these servers.
  • 88. R E D H AT C LO U D F O R M S A version of Red Hat CloudForms is included with each Red Hat OpenStack Platform subscription. It is intended to be used as the day-two cloud management tool for Red Hat OpenStack Platform. It includes the complete feature set of Red Hat’s standalone CloudForms offering. However, it can only be used to manage virtual machines that are hosted by Red Hat OpenStack Platform. It cannot be used with any other virtualization platform. As an example, take a server using Red Hat OpenStack Platform to create and run virtual machines. The included Red Hat CloudForms can manage all the virtual machines hosted on that server. However, if the private cloud includes a mix of compute servers using Red Hat OpenStack Platform, VMware vSphere, and virtual machines hosted on Amazon EC2, the included Red Hat CloudForms subscription can only be used to manage the virtual machines being hosted on Red Hat OpenStack Platform. 88
• 89. R E D H AT C E P H STO R AG E 89
Red Hat OpenStack Platform and Red Hat Cloud Infrastructure subscriptions include the enablement software needed to use Red Hat Ceph Storage with Red Hat OpenStack Platform. This enablement software includes the installation, management, and monitoring tools for Ceph. However, the Red Hat Ceph Storage software needed for the storage nodes is not included. That software component is called the Red Hat Ceph Storage object storage daemon (OSD). It is the OSD for the Ceph distributed storage system and is responsible for storing objects on a local file system and providing access to them over the network. This component is only available in the Red Hat Ceph Storage SKUs.
To expand your Red Hat Ceph Storage capability into production, you can buy any Red Hat Ceph Storage subscription; these subscriptions start at 256TB. For more information about Red Hat's Ceph Storage solutions, visit https://redhat.com/en/technologies/storage/ceph.
• 90. L I F E - CYC L E O P T I O N S 90
With the release of Red Hat OpenStack Platform version 10, a change was made to the life-cycle periods based on feedback from customers. It balances the needs of customers who want access to the latest OpenStack technology as soon as it becomes available with those who want to standardize on one version for the longest possible period.
To meet those needs, the life cycle for Red Hat OpenStack Platform is no longer three years for every major new release. Instead, you can choose either a one-year (standard) or three-year (long-life) life cycle. With the three-year long-life version, you also have the option to purchase extended life-cycle support (ELS) for up to two additional years. The life-cycle periods for version 10 and beyond are:
• Version 10 (based on upstream OpenStack community version "Newton"): three years, with the option to purchase up to two additional years
• Version 11 (based on upstream version "Ocata"): one year
• Version 12 (based on upstream "P" version): one year
• Version 13 (based on upstream "Q" version): three years, with the option to purchase up to two additional years
• 91. T H A N K YO U F O R YO U R AT T E N T I O N !
Innovations Development Lab
https://indevlab.com
info@indevlab.com
Dmytro Hanzhelo
R E D H AT ACC R E D I T E D P R O F E S S I O N A L
R E D H AT S A L E S E N G I N E E R S P E C I A L I ST - C LO U D I N F R A ST R U C T U R E