Red Hat OpenShift
Fundamentals
Red Hat OpenShift Fundamentals
• Getting started with Red Hat’s OpenShift Container Platform
• OpenShift makes it easier to deploy applications in an enterprise
environment
• Allowing developers to roll out applications as fully operational
containers
• Allows administrators to manage the application lifecycle in a flexible
way
• So applications can be monitored and scaled as needed
Lessons
• Understanding OpenShift
• Installing OpenShift
• Getting Started with OpenShift
• Managing OpenShift Networking
• Deploying Applications
• Managing OpenShift storage
Requirements
Recommended that you are…
• Comfortable with Linux
• A bit of experience in containers & Kubernetes
Hardware Requirements:
Using MiniShift
Single VM that needs 4GB RAM & 20GB Disk Space
Full-fledged OpenShift cluster
Running a 3-node cluster, with 3 VMs, requires 12GB of RAM and 80GB of
disk space
Introducing OpenShift
We will learn all we need to get started with OpenShift
Lesson 1: Understanding OpenShift
Objectives:
• Understanding Containers & OpenShift
• Understanding Red Hat Container Management Solution
• Understanding OpenShift in a Container Environment
• Understanding OpenShift in a DevOps environment
• Understanding OpenShift Architecture
• OpenShift vs Kubernetes (which feature is similar, and which feature
is different)
• Understanding the role of OpenShift in Hybrid Cloud environment
Understanding Containers & OpenShift
Understanding OpenShift
• OpenShift Container Platform (OCP) allows developers to easily build
an environment based on source code that you insert into the system
• Using OpenShift allows developers to bring applications to market
without any delay
• OpenShift supports code written in many programming languages
• OpenShift is a PaaS solution that is built on top of Kubernetes
• The result is a container that will be orchestrated by the integrated
Kubernetes layer
On Prem vs IaaS vs PaaS vs SaaS
Understanding Containers
• Containers are the modern-day replacement of applications that are
installed on servers
• Containers contain all dependencies that are required to run an
application and are started on top of a container engine
• Containers do not include a kernel, but run on the host OS kernel
• Docker is the most common container solution
• Docker engine is a common engine, but not the only one: for example, in
RHEL 8, containers can run natively on top of the RHEL OS. It is still a
fast-moving technology that’s always subject to change.
PaaS?
• Platform as a service (PaaS) is a category of cloud computing services that
provides a platform allowing customers to develop, run, and manage
applications without the complexity of building and maintaining the
infrastructure typically associated with developing and launching an app
(Wikipedia)
• OpenShift is a PaaS solution that adds different PaaS features to a
Kubernetes/Docker environment
• Remote management
• Multitenancy
• Security
• Monitoring
• Application life-cycle management
• Auditing
Understanding Kubernetes
• Kubernetes is a portable, extensible open-source platform for
managing containerized workloads and services
• Containers need to be orchestrated
• When containers are running in an enterprise environment, you will
need an HA system, which needs to be orchestrated
• Created by Google in 2014, based on Google Borg
• Kubernetes orchestrates computing, networking, and storage
infrastructure
• OpenShift is built on top of Kubernetes, so that OpenShift doesn’t
have to recreate everything; currently, Kubernetes is the de facto
standard for container orchestration
Understanding Red Hat Container
Management Solution
Understanding Podman
• RHEL 8 includes Podman, a solution to run containers natively on top
of RHEL
• No need for Docker
• Podman is for stand-alone containers, and is useful to run individual
containers without any enterprise features
• If the host fails, the container will also fail, and no other host will take care of
the container
• Difference with Docker: Podman runs containers with a random UID
and not as root
Containers Operating
System
• Containers can run on top of a full Linux
distribution
• For increased efficiency, it’s better to run
containers on top of a container OS
• Container Linux (formerly CoreOS) is a
container OS whose maker, CoreOS, was
acquired by Red Hat
• Already integrated in OpenShift as a
container OS that has been in
development for a while
OpenShift
• OpenShift is a platform that integrates container management and
application builds in an enterprise platform
• OpenShift exists in different forms
• OKD (previously known as OpenShift Origin) – free
• OpenShift Container Platform – Red Hat Solution – commercial
• OpenShift online – Multitenant version of OpenShift with infrastructure
managed by Red Hat
• OpenShift on Public Cloud Platforms
• Azure
• AWS
• Google Cloud Platform
• IBM Cloud
Understanding OpenShift in a Container
Environment
Using OpenShift to Manage Containers
How do we manage containers?
• Kubernetes is the de facto standard for managing and orchestrating
containers
• OpenShift is not required for managing containers, but offers some
significant benefits over Kubernetes
• Strict security policies – much more secure than default Kubernetes
• Routers make it easier to access applications
• Better management of container images
• S2I – Source-to-Image; developers can automatically build containers from the source
code, and can even trigger a new build when the source code changes
Understanding OpenShift in a DevOps
Environment
Understanding CI/CD
• Continuous Integration (CI) is the integration of source code from multiple
authors into a shared source code management (SCM) repository
• Git is such an SCM repository
• Such an environment supports multiple changes per day
• In OpenShift, Git push events can be captured and result in new
containers that are automatically created
• The result is Continuous Delivery (CD), an environment where new versions
of the software are automatically deployed
• In the flow of the CI/CD process, pipelines play an important role
Understanding Pipelines
Pipelines are a representation of all steps in the CI/CD process
• Build
• Test
• Packaging
• Documentation
• Reporting
• Deployment
• Verification
A common tool to work with pipelines is Jenkins
Understanding OpenShift and DevOps
• For DevOps, using Infrastructure as Code is an important goal
• OpenShift goes beyond that, and offers a solution to automate the build of
containers, without needing to know anything about infrastructure
• Containers are a perfect solution to isolate the responsibilities of the
developers and operations teams
• To do so, Pipelines are integrated. Pipelines are a solution that allows
teams to automate and organize all activities required to deliver software
changes
• These pipelines are offered through integrated Jenkins Pipelines
• OpenShift supports all five stages of the DevOps application lifecycle
OpenShift and the DevOps Lifecycle
• Build: Developers can build applications quickly and easily, without the need
for IT operations to set up anything
• Test: Continuous Integration (CI) is offered through the built-in Jenkins CI
server and lets developers integrate code automatically with every change
• Operate: Continuous Delivery (CD) is offered using Pipelines to automate
every step of the application delivery
• Deploy: Auto-scaling features ensure that at all times, the required number of
instances is available
• Monitor: Metrics, health checks, and self-healing ensure that the
environment stays healthy
OpenShift Architecture
Master Node:
- API
- Authentication
- Replication
- Scheduler
RHEL / Atomic
Worker Node 1
RHEL / Atomic
C1 C2 C3 C4
Worker Node 2…n
RHEL / Atomic
C1 C2 C3 C4
OpenShift vs Kubernetes
Understanding OKD
• OpenShift uses the OKD project as its upstream
• OKD = OpenShift Kubernetes Distribution
• Kubernetes is an important part of OpenShift
• OKD is a distribution of Kubernetes optimized for continuous
application development and multi-tenant deployment. OKD adds
developer and operations-centric tools on top of Kubernetes to
enable rapid application development, easy deployment and scaling,
and long-term lifecycle maintenance for small and large teams. OKD is
the upstream Kubernetes distribution embedded in Red Hat
OpenShift. (okd.io)
Understanding OpenShift on Kubernetes
• OpenShift adds features on top of Kubernetes, but uses the core
Kubernetes infrastructure
• OpenShift adds resource types to the Kubernetes environment and
stores them in Etcd
• Most OpenShift services are implemented as Docker containers
• OpenShift adds xPaaS, middleware services that can be offered as
PaaS, by adding JBoss middleware solutions
• xPaaS = aPaaS, iPaaS, bpmPaaS, dvPaaS, mPaaS + OpenShift
• Some Kubernetes resource types are not available in OpenShift
Understanding the Purpose
• Kubernetes focuses on providing container orchestration
• OpenShift adds features to that:
• A build strategy to build source code
• Built-in container registry
• Version control integration
• Security
Shared Resource Types
• Kubernetes and OpenShift share some resource types:
• Pods
• Minimal entity that is managed in OpenShift or Kubernetes environment
• Typically contains a container
• OpenShift doesn’t run containers by themselves; in order to run containers, OpenShift manages Pods
• Usually only contains one container, but it depends on the microservices architecture
• Namespaces
• Called projects in OpenShift
• Provides a strictly isolated environment offered by the Linux kernel
• Impossible for pods running in one namespace to interfere with pods that are running in a different namespace
• Deployment Config
• The configuration file that defines the application
• One of the things it does is take care of replication, the number of instances of an application that you want to run
• Services
• Exposing the application to the outside world
• Persistent Volume and Volume Claims
• Used for setting up storage
• Persistent storage is the external storage that you want to use in the OpenShift environment
• A volume claim is the claim that the deployment config uses to get storage from a persistent volume
• The volume claim allows the deployment config to tell the persistent storage, “hey, I need 5GB”
• Secrets
• Solution to store secret information and connect that to the pod (API keys, password, SSH keys, etc)
OpenShift Resource Types
Some resource types are unique to OpenShift
• Images
• The product delivered by Source-to-Image
• In Kubernetes, the image usually comes from Docker, or is a manually created image
• OpenShift integrates the image build process
• Image Streams
• A tagged reference to an image; tags can be used to assign new version numbers, etc.
• Templates
• Allow you to run applications in a standardized way
• Build Config
• Defines how an image is built in the OpenShift environment
• Routes
• Solution that allows you to create a DNS FQDN, which can be used to access the application publicly (over the
Internet, an internal network, etc.)
• There is no such resource type in Kubernetes
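As a sketch, an image stream that tags an external image might look like this (the stream name and image are illustrative; the API group may differ per OpenShift version):

```yaml
# Hypothetical image stream: the "latest" tag tracks an external image.
# New builds or deployments can be triggered when the tagged image changes.
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: myapp
spec:
  tags:
  - name: latest
    from:
      kind: DockerImage
      name: nginx:1.14
```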
OpenShift in a Hybrid Cloud Environment
Understanding Hybrid Cloud
• Hybrid Cloud is a cloud that combines different types of cloud services
• This can be a private cloud vs public cloud
• But also IaaS cloud and PaaS cloud
• OpenShift is a hybrid cloud solution, as it allows you to run containers on
any IaaS cloud solution
• The IaaS cloud is a solution for managing large infrastructures
• OpenShift is the solution to easily deploy an application on top of that
infrastructure
• In an OpenShift context, the Hybrid Cloud provides ultimate flexibility by
combining containers and IaaS cloud
Understanding the IaaS Layer
• The IaaS layer offers flexibility in deploying an infrastructure
• OpenShift can be installed on a traditional physical data center
• But for more flexibility to scale up host machines in a dynamic and automated way, we need IaaS
cloud
• In IaaS, every part of the infrastructure can be automated
• Virtual machine
• Storage volume
• Subnets
• Firewalls
• If we install OpenShift on top of IaaS, we get two layers of automation: at the
infrastructure level and at the application level
• Automated deployment offers the flexibility that is required to easily scale up applications
• With just IaaS, it’s difficult to have automated application deployment; it exists only at the infra level
Understanding the OpenShift Layer
• OpenShift allows developers to define an application in a simple
YAML file that will fetch the source code from a GitHub repository
• OpenShift on IaaS allows developers to focus on the application,
while ignoring the required underlying infrastructure
• Ansible can be used for full integration and automation: Ansible is the
solution for automation of everything
Hybrid (IaaS+PaaS) Cloud Environment
Node Node Node
Control Compute Compute
OpenStack
VM1 VM2 VMn
C1 C2 C3 C4
VMn+1
Worker
Master
OpenShift
Worker Worker
Lesson 2: Installing OpenShift
• Understanding OpenShift Versions
• Installing Minishift
• Using oc cluster up
Understanding OpenShift Versions
OpenShift Installation Options
• Red Hat OpenShift
• Licensed version of OpenShift, used by companies and enterprises
• Can be installed as an on-premise cluster
• Can be also installed in Public or Private Cloud
• OKD
• Community Supported
• Minishift (POC only)
• Nice way to get to learn OpenShift
• Only requires 4GB of RAM
• OKD in a container: oc cluster up
• OKD in public or private cloud
• Install as an on-premise cluster
Installing MiniShift
MiniShift Installation Options
• MiniShift is available for different operating systems
• You will need a hypervisor
• MacOS: xhyve
• Linux: KVM
• Windows: Hyper-V
• Cross Platform: VirtualBox
• Basically it’s a VM
• Demo
Managing Minishift Addons
• Minishift, by default, has a couple of restrictions which make it so
certain security settings won’t work
• To make MiniShift more relaxed, you’ll need to enable some addons:
• minishift addons list – shows current addons
• minishift addons enable admin-user – creates a user with cluster admin
permissions
• minishift addons enable anyuid – allows you to log in using any UID
• It makes more sense to use the admin user in Minishift, since you will
need an admin user for infrastructure-related tasks, and it will most
probably be a single-user environment
Installing the OpenShift Client
• The oc client is used on all types of installations
• Download the client software from www.okd.io
• Extract and copy the oc binary to /usr/local/bin or add it to the PATH
environment variable
• After extracting, type oc or oc status to verify the command
availability
Add minishift & oc to the PATH environment variable
• Start > “Edit the system environment variable”
• Environment Variables…
Try some commands
• minishift addons list
• oc status
• oc whoami
• oc login -u developer -p anything
Using oc cluster up
Understanding oc cluster up
• Runs a couple of containers directly on top of Docker
• Requirements: Docker CE and the OpenShift client
• oc cluster up method uses Docker engine and the OpenShift client
utility to spin up a proof-of-concept cluster
• Use it as an alternative for Minishift
Using oc cluster up
• Always check the current version of the documentation
• Install docker-ce
• Edit file: /etc/docker/daemon.json
{
  "insecure-registries": ["172.30.0.0/16"]
}
• This allows Docker to use the insecure registry running in this private network range
• systemctl daemon-reload; systemctl restart docker
• Disable the firewall
• Run docker run nginx to create a local config and start a container
• Type sudo oc cluster up; it takes about 10-15 minutes
• Check using docker ps
• Shutdown: oc cluster down
Lesson 3: Getting Started With OpenShift
• Getting Started with the Web Console
• Understanding Resource Types: Pods & Namespaces
• Understanding Resource Types: Deployment Configs & Networking
• Managing Resources from the Command Line
• Using Source-to-Image to Create Application
• Basic OpenShift Troubleshooting
Getting Started with the Web Console
Understanding Projects - 1
• OpenShift is oriented around the project
• An isolated environment
• Different items exist within project
• Applications: the containers that provide services
• Builds: the process that defines how to build the container from a repo
• Resources: additional optional configuration
• Storage: persistent storage that can be used by the applications
• Tip: OpenShift cheat sheet
• https://is.gd/openshift_cheatsheet
Understanding Projects – 2
• In OpenShift, you would deploy applications (microservices). Each application consists of
different projects, where a project is a part of the application stack
• Projects: a project is a Kubernetes namespace that contains all services running in the
OpenShift application and works as a strictly separated environment
• Useful in multi-tenant deployment; where customer A and customer B can have a completely
separated environment
• Namespaces are implemented by the Linux kernel; they separate the network, filesystem, etc.
• Specific users may have access to specific projects only
• Type oc config get-contexts to see all current projects (all users) and oc projects to see
your current projects (your account)
• After logging in, you’ll see which projects you have access to
• Use oc project myproject to switch to a different project
• Resources will always be specific to a project
• If you run an application in a project, it will not be visible in another project
Demo: Creating an Application
• From Catalog, select PHP, version 7.1
• Provide a name to the application
• Specify the git repository to use
• https://github.com/WordPress/wordpress.git
• Click Create to launch, next close that window
• Now get to Overview, where you can see the application is being built. Click it to see
details
• Now, select Builds where you can see the actual application
• Further click on the application details to explore what it is doing
• At the end of the build, an image is created and pushed to the OpenShift container
registry
• Check success in the Events log
• Check routes, it contains the DNS name to get to the application
Understanding Resource Types: Pods and
Namespaces
• OpenShift runs containers
• But OpenShift doesn’t manage containers directly; it manages pods
• It uses Deployment Config to manage pods
Understanding Resource Types
• The result of your efforts in OpenShift is a microservice – also referred to as an app
• The app is created in an OpenShift Project, which corresponds to a Kubernetes
namespace – an isolated environment implemented by the Linux kernel
• An app consists of different resources – like a building block
• The resource types are specified in the OpenShift API
• The OpenShift API defines the resource types; if the API is updated, new resources will be
available
• As OpenShift is built on top of Kubernetes, most resource types from the
Kubernetes API are also supported
• There are two options to create an app (and all required resources)
• Use oc new-app
• Create a manifest file in YAML to identify all the different resources
Understanding Namespaces
• Namespaces are an important part of OpenShift, from an architect’s point of view
• A Kubernetes namespace is a group of isolated resources that behaves as a cluster; in OpenShift
we call this a project
• Namespaces implement isolation at the Linux kernel level and are available at different levels
• mount -> filesystem; only presents one specific area of the filesystem
• PID -> process table; each container can only see its own PID table, and cannot see what’s happening in
another namespace
• network -> makes every namespace an isolated network; namespaces can only communicate with each other
through routing
• IPC -> inter-process communication is limited to processes within the namespace; communication outside
of the namespace is not possible
• User ID -> you can have users with the same ID and name in different namespaces, as if they are on different
computers
• Cgroup -> Linux feature that allows resource allocation, to make sure that every container has dedicated
RAM, CPU cycles, and so on
• Because namespaces are used, a strictly isolated environment can be implemented
Understanding Pods
• An application is defined in an image
• Analogy: it’s like an ISO file or installer
• A container is a run-time instance of an image
• A Pod is a solution to run groups of containers
• Using Pods allows you to group multiple applications
• Usually we will only have one container in a Pod, as that is the microservices best
practice
• Containers in a pod have an isolated pid namespace and filesystem
namespace, but share the same network namespace, volumes, and
hostname.
• Containers in the Pod will always run on the same host
• It’s not possible to spread out containers if they are in the same pod
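A minimal sketch of a pod with two containers sharing the pod’s network namespace (the image names and the probe loop are illustrative):

```yaml
# Hypothetical two-container pod: the sidecar reaches the web server
# on localhost because both containers share one network namespace.
apiVersion: v1
kind: Pod
metadata:
  name: shared-net-pod
spec:
  containers:
  - name: web
    image: nginx:1.14
  - name: sidecar
    image: busybox:latest
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80; sleep 10; done"]
```

Both containers are always scheduled to the same host, which is what makes the localhost connection possible.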
Demo
• oc whoami
• oc get pods
• Get information
• The -build pod reveals information about the build process, getting the source from the repo,
etc.
• oc get all
• We did not create a pod, we created an application
• It lists all the components/resources that were created with the application
• The most important here is the deploymentconfig; it is what will be used to
run the different pods
Create a yaml file to create pod – helloworld.yaml
apiVersion: v1
kind: Pod
metadata:
  name: examplepod
spec:
  containers:
  - name: ubuntu
    image: ubuntu:latest
    command: ["echo"]
    args: ["hello world"]
Understanding Resource Types: Deployment
Configs and Networking
Understanding Deployment Config
• To run Pods, you’ll start a Deployment Config, as these add useful
features to the Pods
• From the user’s perspective, we’re creating a new app, and creating a new app means
creating a new deployment config
• One of the features: the Replication Controller, which takes care of the replication
of pods, and is a part of the deployment
• Update Strategy is also a part of the deployment
• Rolling update: maintains the desired amount of pods
• Recreate: stops all Pods and deploys new Pods
• Custom: allows you to run any command in the deployment
• Triggers define when a new deployment should be created
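The pieces above (replication, update strategy, triggers) can be sketched as a DeploymentConfig; the names are illustrative and the API group may differ per OpenShift version:

```yaml
# Hypothetical deployment config: three replicas, rolling updates,
# and triggers on configuration or image changes.
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: myapp
spec:
  replicas: 3
  strategy:
    type: Rolling
  selector:
    app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: nginx:1.14
  triggers:
  - type: ConfigChange
  - type: ImageChange
    imageChangeParams:
      automatic: true
      containerNames:
      - myapp
      from:
        kind: ImageStreamTag
        name: myapp:latest
```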
Understanding Deployment Triggers
• When critical components change, you would like a new deployment
to be generated automatically
• Use oc describe on a deployment and look for triggers to figure out
the default triggers
• ConfigChange: triggers a new deployment on configuration change
• Image: triggers a new deployment when a new image is available
• Manual triggers can be issued, using oc deploy myapp --latest
Understanding Replication Controllers
• The Replication Controller (RC) is a part of the Deployment Config
• RC uses labels and selectors to track availability of Pods
• Every pod by default has a label
• Manual labels can be set as well
• The RC uses a selector to specify which labels should be used
• Use oc get pods --show-labels to show the labels that OpenShift has
automatically added
• Use oc describe rc <name> to see the current selector that is used
Understanding Services and Route
• If you look at the overview tab in OpenShift, you can see available
applications, including the URL you need to access the application
• For a replicated application, there’s a load balancer behind the scenes to decide
which pod to connect to
• The service takes care of load balancing, and gives the pods one identity
• The route is what gives a published URL, and what allows access to
the application from outside the cluster
• A route on Kubernetes is based on the ingress controller, which needs additional
configuration
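A route manifest can be sketched as follows (the hostname and service name are illustrative):

```yaml
# Hypothetical route: publishes the my-app service under a public FQDN.
# The router pod load-balances incoming traffic to the target pods.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app
spec:
  host: my-app.apps.example.com
  to:
    kind: Service
    name: my-app
  port:
    targetPort: 8080-tcp
```

In practice, oc expose service my-app generates a route like this automatically.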
Demo
• oc get dc
• The demo app; in the web console it’s called an app
• Triggered by: the deployment will be triggered by these values
• oc get rc
• Information about the replication controller
• How many replicas are there?
• oc get pods --show-labels
• The labels are shown here, app=demo-app, etc
• It connects the pods to the deploymentconfig
• oc describe rc demo-app-1
• We can see the complete configuration
• Name, namespace, selector, labels, replica, strategy, status, containers, image etc
Demo: Managing Resources from the
Command Line
• oc login -u developer -p anything
• oc new-project firstproject
• oc new-app --docker-image=nginx:1.14 --name=nginx
• oc status (use repeatedly to trace the process)
• oc get pods
• oc describe pod <podname>
• oc get svc
• oc describe service nginx
• oc port-forward <podname> 33080:80
• curl -s http://localhost:33080
Demo: Creating another App
• oc whoami
• oc new-project mysql
• oc new-app --docker-image=mysql:latest --name=mysql-openshift -e
MYSQL_USER=myuser -e MYSQL_PASSWORD=password -e
MYSQL_DATABASE=mydb -e MYSQL_ROOT_PASSWORD=password
• oc status -v
• oc get all
• oc get pods -o=wide
• Log in to the web console and see the new app in a different project
Using Source-to-Image to Create Applications
• An important part of OpenShift that allows developers to
automatically build containers based on source code in a Git repo
Understanding S2I
• To create Images automatically, a Dockerfile could be used
• Source-to-Image (S2I) takes application source code from a source
control repository (such as Git) and builds a container image based on
it to run the application
• While doing so, the image is pushed to the OpenShift registry
• Using S2I allows developers to build running containers without the
need to know anything about the specific OS platform
• S2I also makes it easy to patch: after updating the application code a
new image is generated
• This process is handled as a rolling upgrade
Image and Image Streams
• OpenShift works with Image Streams
• An Image Stream is a consolidated view on related images
• An image is a runtime template that contains all data that is needed to run
a container
• This includes metadata that describes image needs and capabilities
• Images in an image stream are identified by a tag, and can be specified as
such
• image=nginx:1.8
• Two types of images exist
• Builder images are used in the S2I process to build applications
• The result is a runtime image that is used to start an application
• Like an ISO file that is used to spin up the application
Exploring Builder Images
• Default Builder Images are available in OpenShift
• Check the Catalog in the browser interface
• PHP, etc
• Or use oc get is -n openshift for an overview
• Alternatively, builder images can be created by the administrator
Understanding the S2I flow
• To build an image based on source code, a base image is required; this image
is known as the builder image and is used as a runtime environment
• Base builder images such as Python and Ruby are included
• Builder Images are available in the catalog that you see in the web interface
• When either the application source code or the builder image gets
updated, a new container image can be created
• Applications need to be updated after a change of either the application
code, or the builder image itself
• Applications are built against image streams, which are resources that
name specific container images with image stream tags
• The base S2I images may be obtained from a trusted repository, or can be
self-built
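The S2I flow above can be sketched as a BuildConfig: Git source, a builder image referenced through an image stream tag, and output to an application image stream. The resource names are illustrative and the API group may differ per OpenShift version:

```yaml
# Hypothetical S2I build config: builds the Git source with the php
# builder image and pushes the result to the myapp:latest stream tag.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: myapp
spec:
  source:
    type: Git
    git:
      uri: https://github.com/sandervanvugt/simpleapp
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: php:7.1
        namespace: openshift
  output:
    to:
      kind: ImageStreamTag
      name: myapp:latest
```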
Building an Application - 1
• The oc new-app command is used to build the application from a Git repository
• Use oc new-app php~http://github.com/sandervanvugt/simpleapp --name=myapp to build the
application from the git repository
• In this command, the php part in front of the URL indicates the image stream that is to be used
• If no image stream is given, the oc new-app command tries to detect which image stream is used based
on the presence of some files
• Use oc new-app -o yaml php~http://github.com/sandervanvugt/simpleapp --name=myapp >
s2i.yaml to automatically generate a YAML definition file that contains all resources to be created
• The app itself is NOT created
Building an Application - 2
• After creating the new application, the build process starts. Type oc
get builds for an overview
• A buildconfig can be used to trigger a new build
• The BuildConfig pod is responsible for creating images in OpenShift and
pushing them to the internal Docker registry
Explore New App YAML file
• Kind: ImageStream
• Kind: BuildConfig
• Source: describe where the source is coming from
• Strategy: Defining how we want to build the source
• Kind: DeploymentConfig
• Labels that we have set
• Number of replicas
• Containers that have been built in the previous step
• :latest as the latest image
Demo: Building an Application
• oc logs -f bc/simple-app to track the progress
• oc status – simpleapp is now deployed
• oc get all
• Now we have the pod, replicationcontroller, service, deploymentconfig, and
buildconfig
• oc get builds
• Info about the build that we just ran
• oc describe builds simple-app-1 (name of the build from prev
command)
Basic OpenShift Troubleshooting
• oc get events will show recent events
• oc logs <podname> will show what has happened on a specific pod
• oc describe pod <podname> will show all pod details
• oc projects will show all projects, you might be in the wrong project!
• oc delete all -l app=simpleapp will delete everything using that label
• When we create an app, we also create a Pod, DeploymentConfig,
ReplicationController, BuildConfig, etc. It’s better to delete all of them based
on the label
• oc delete all --all
• Deletes everything in the current project
Part 2 – Managing & Deploying OpenShift
Lesson 4 – Managing OpenShift Networking
• Understanding Software Defined Networking
• Understanding OpenShift SDN
• Understanding Services
• Understanding Routes
• Creating Routes
Understanding Software Defined Networking
Node1 Node2 Node3
Routed
ins1 ins2 ins3
SDN
Underlay
Overlay
Direct
Understanding OpenShift SDN
• On Docker, containers connect to a host-only virtual bridge
• Communication with containers on other hosts goes through port mapping
• Container ports are bound to ports on the host
• OpenShift SDN decouples the control plane from the data plane and thus
implements SDN
• SDN is implemented with plugins
• A plug-in adds knowledge about specific networking to the infrastructure
• The cluster network is created using Open vSwitch
• Master nodes do not have access to containers, unless this was specifically enabled
• This is a security feature
Understanding OpenShift SDN Plug-ins
• ovs-subnet: provides a flat pod network where every pod can communicate
with every other pod and service
• It is an Open vSwitch plugin, hence ovs
• ovs-multitenant: isolates networking per project
• Each project gets its own Virtual Network ID (VNID)
• Pods can only communicate with Pods that share this VNID
• Pods with VNID 0 can communicate with all other pods and vice-versa
• Usually for management / administrative pods
• The default project (all the management containers for OpenShift) has a VNID of 0
• ovs-networkpolicy: allows administrators to define their own policies
• To do so, NetworkPolicy objects are used
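With the ovs-networkpolicy plug-in, such a policy might look like this sketch (the labels are illustrative): only pods labeled app=frontend may reach pods labeled app=db.

```yaml
# Hypothetical policy: allows ingress to app=db pods only from
# app=frontend pods in the same project; other traffic is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-db
spec:
  podSelector:
    matchLabels:
      app: db
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
```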
Understanding Pod Networking
• Each pod has its own unique IP Address
• Containers within a pod behave as if they are all on the same host
• As mentioned previously, each pod usually only has one container
• As a result, pods are treated like physical or virtual machines
• To access Pods, services are used
Understanding Pod Networking
Pod1
IP Addr
C1 C2 C3
Each container can only be
accessed through ports, as the pod
has only one IP address from the
outside
Understanding Services
• Services implement round-robin load balancing to access pods
• We can have multiple pods that are presented to the end user as one; let’s say we have
replicas
• We need to load balance them
• The service has a stable IP address and allows communication with pods for
external clients
• Services also allow replicated pods to communicate with one another
• Services use a selector attribute to connect to Pods
• Each pod matching the selector is added to the service resource as an endpoint
• Pod as well as service IP addresses cannot be reached from outside the cluster
(pod uses a private IP)
• We will use a router instead, to be able to access the pods externally
Understanding Services
- apiVersion: v1
  kind: Service
  metadata:
    labels:
      app: my-app
    name: my-app
  spec:
    ports:
    - name: 8080-tcp
      port: 8080
      protocol: TCP
      targetPort: 8080
      nodePort: 38080
    selector:
      app: my-app
      deploymentconfig: my-app
    type: NodePort
Selector for choosing which app is
going to be managed by this service
Exposed port
Getting Traffic in and out of the Cluster
Three methods exist for clients that need access to the OpenShift service
• HostPort/HostNetwork: clients can reach the Pod directly by using forwarded
ports. Ports in the pod are bound to ports on the host where it is running.
Escalated privileges are required to use this method
• Not flexible, as it requires privilege escalation, so not very common
• NodePort: the service is exposed by binding to available ports on the node host.
The node host proxies connections to the service IP address
• NodePort supports any traffic type
• NodePorts are in the range 30000-32767 by default. This can be changed
• If not specified, a random nodePort is assigned by OpenShift.
• One usually specifies the port in the default range, as shown in previous YAML example
• OpenShift routes: services are exposed using a unique URL
• Routes support HTTP, HTTPS, TLS with SNI and WebSockets only
• Web based protocol, like a reverse proxy
How They All Interconnect To Each Other
P1 P2 P3 P4
S: 8080 S: 8080
RR - LB
Nodeport Nodeport
VIP: 1.2.3.4
Route
External
(DNS)
Understanding Routes
• OpenShift routes allow network access to pods from outside the OpenShift
environment
• If you want your app to be accessed by external users, you will need route
• A dedicated router pod is used to load-balance traffic between the target
Pods
• The router pod uses HAProxy and can be scaled itself
• The router pod queries the etcd database on the OpenShift master to get
information about the Pods
• The router exposes a public-facing IP address and DNS hostname to the
internal Pod networking
• Routers connect directly to the Pods; the service is used for Pod lookup
only but not involved in the actual traffic flow
Router YAML code
- apiVersion: v1
  kind: Route
  metadata:
    creationTimestamp: null
    labels:
      app: my-app
    name: my-app
  spec:
    host: externaldnsname.apps.example.com
    port:
      targetPort: 8080-tcp
    to:
      kind: Service
      name: my-app
Routers – Behind The Scene
• oc whoami
• Need to be system:admin
• oc projects
• oc get all -n default
• pod/router-xxxx
• oc describe pod/router-xxxxx -n default
Creating Routes
• oc expose service my-app --name my-app [--hostname=my-app.apps.example.com]
to create a route on top of an existing service
• Specify a DNS name only if it can be resolved through a wildcard DNS domain
• If a DNS name is not specified, one will be generated automatically
• Alternatively, use oc create combined with a YAML or JSON file
• Note that oc new-app does NOT create a route
• Because you don’t want your newly deployed application automatically
exposed, for security reasons
• Use oc delete route to un-expose a service
Managing Router Properties
• The default routing subdomain is set in the master-config.yaml
OpenShift configuration file
routingConfig:
subdomain: apps.example.com
• Notice that the router must be able to bind to ports 80 and 443; do
NOT run a router on a host that already uses these ports for
something else
Understanding Router Types
• Secure routers can use several types of TLS termination
• Edge Termination: TLS is terminated at the router, and traffic from router to
Pods is not encrypted
• Pass-through Termination: the router sends TLS traffic straight through to the
Pod and the Pod is responsible for serving certificates
• Re-encryption Termination: the router terminates the TLS traffic and re-
encrypts traffic to the endpoint
• Unsecure routers don’t do TLS termination, so they are easier to set up
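The termination type is declared in the route's tls section. As a hedged sketch (the route name and hostname are placeholders, following the earlier route YAML), an edge-terminated route could look like:

```yaml
# Hypothetical edge-terminated route: TLS ends at the router,
# traffic from router to pod travels unencrypted
apiVersion: v1
kind: Route
metadata:
  name: my-app-secure
spec:
  host: my-app.apps.example.com
  to:
    kind: Service
    name: my-app
  tls:
    termination: edge     # alternatives: passthrough, reencrypt
```

For passthrough or re-encryption, change the termination value accordingly; passthrough leaves certificate handling to the Pod.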
Try To Create Routes
• oc whoami
• As developer
• oc get all
• Find out what pods and services we have
• oc expose [servicename]
• oc expose svc/httpd
• oc expose service httpd --name httpd
• oc get all
• Now it’s there
• oc describe route [routername]
• oc describe route httpd
• Pay attention to Requested Host:
• Endpoints: -> how we get to the Pod
Lesson 5: Deploying Applications
• Scaling Applications
• Scheduling Pods
• Managing Images and Image Streams
• Managing Templates
Understanding Application Scaling
• Application Scaling is handled by the replication controller
• The replication controller ensures that the number of pods specified in
the replica count is running at all times
• To do so, the replication controller monitors the pods by using labels as
the selector
• This selector is a set of labels that exists in the Pod as well as in the
Replication Controller
• Replication Controllers can be managed directly, but it’s
recommended to manage them through Deployment Configs
Scaling Applications
The number of replicas can be scaled manually or automatically using
Autoscale
• Manual Scaling
• oc get dc
• oc scale --replicas=5 dc simpleapp
• Autoscaling
• The HorizontalPodAutoscaler resource type is used to automatically scale
based on current load on application pods
Understanding Autoscaling
• HorizontalPodAutoscaler uses performance metrics that are collected
by the OpenShift Metrics subsystem
• If this subsystem is in place, use oc autoscale dc/myapp --min 1 --max 10 --
cpu-percent=80 to automatically scale
• This command creates a HorizontalPodAutoscaler object that changes
the number of replicas such that the pods are kept below 80% of CPU
usage
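The oc autoscale command creates a HorizontalPodAutoscaler object behind the scenes. A hedged sketch of the equivalent YAML (resource names are illustrative, and apiVersion strings vary between OpenShift releases):

```yaml
# Hypothetical HPA: keep between 1 and 10 replicas of the
# DeploymentConfig "myapp", targeting 80% average CPU usage
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps.openshift.io/v1
    kind: DeploymentConfig
    name: myapp
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
```

Inspect the created object with oc get hpa.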
Manual scaling
• oc new-app -o yaml php~https://github.com/sandervanvugt/simpleapp
--name=simpleapp > s2i.yaml
• Open the YAML file
• Go to the DeploymentConfig section
• replicas: 1
• Standard replication
• Deploy the app
• oc get dc
• Now we can see the replicas
Scheduling Pods
Understanding Pod Scheduling
• Pods by default are distributed between the nodes in a cluster
• The scheduling process can be manipulated, using different items
• Zones and Regions
• Node labels
• Affinity rules and anti-affinity rules
• All nodes, including the master, can run Pods
• You should only run the web console Pod on the master
• Use the Ansible variable osm_default_node_selector to enable/disable
running pods on the master
• This is configured during installation of OpenShift cluster
Understanding the Pod Scheduler Algorithm
• Pod scheduling is a 3-step process
• Filter nodes
• The scheduler filters nodes according to node resources that are required by pods
• Maybe some pods require something like SSD storage
• Node selectors can be used in this process
• Pods can also request access to specific resources
• Prioritize the filtered list of nodes
• Affinity rules: used to ensure that Pods that belong together run close to each other
• Anti-affinity rules: ensures that Pods will not run close to each other
• Select the best fit node
• The algorithm applies a score to each node
• The node with the highest score will run the pod
Understanding Topology
• Topology can be applied to make scheduling easier in large datacenters
• A topology consists of regions and zones
• A region is a set of hosts with a guaranteed high-speed connection
between them, typically in the same geographical area
• A zone is a set of hosts that share the same infrastructure components
(network, storage, power), and for that reason might fail together
• For example, resources that run in the same rack in a DC
• OpenShift can use region and zone labels in pods
• Replica pods are scheduled on nodes in the same region by default
• Within a region, replica pods are spread across nodes with different zone labels
Setting Topology Labels
• By default, nodes get the region=infra label
• Administrators can use the oc label command to set labels on nodes
• oc label node node1.example.com region=eu-west zone=rack1 --overwrite
• oc label node node2.example.com region=eu-west zone=rack2 --overwrite
• To show nodes and their labels, use oc get node node1.example.com
--show-labels
Taking Down a Node
Sometimes you need to take down a node
• To take down a node, OpenShift has a two-step process
• First, mark the node as unschedulable: oc adm manage-node --
schedulable=false node1.example.com
• Next, drain the node. This will destroy all pods on the running node
so that they are created somewhere else: oc adm drain
node1.example.com
• Once finished, use oc adm manage-node --schedulable=true
node1.example.com
Using Node Selectors
• Node labels and node selectors can be used to ensure a Pod is
scheduled on a specific node
• A node selector in the Pod definition matches a label that is set on the node
• To set a node selector, change the pod definition using oc edit or oc
patch
• oc patch dc myapp --patch '{"spec":{"template":{"spec":{"nodeSelector":{"env":"qa"}}}}}'
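The patched pod template then carries the selector. As a minimal sketch (the pod name, image, and the env=qa label are illustrative), a nodeSelector in a pod definition looks like:

```yaml
# Hypothetical pod pinned to nodes labeled env=qa
apiVersion: v1
kind: Pod
metadata:
  name: qa-pod
spec:
  nodeSelector:
    env: qa          # pod only schedules on nodes carrying this label
  containers:
  - name: web
    image: nginx
```

If no node carries the matching label, the pod stays in Pending state.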
Understanding the Default Project
• Upon installation, the default project is created
• In bigger clusters, it’s a good idea to use this project to run
infrastructure pods such as the router and internal registry
• To do this, label dedicated nodes with the region=infra label
• Next, use oc annotate to add this label to the namespace, using a
node selector: oc annotate --overwrite namespace default
openshift.io/node-selector='region=infra'
• This makes sure that pods in the default project run on those nodes only
Managing Images and Image Streams
Understanding Images
• An image is a deployable runtime template that includes all that is needed
to run a container
• In OpenShift, a single image can refer to different versions of the same
image. Docker does not use version numbers, but tags to refer to specific
versions of an image
• An image stream comprises a number of container images identified by
tags
• It is a consolidated view of related images
• In OpenShift, deployments and builds can receive notifications when new
images are added, and as a result trigger a new build or deployment to be
started
Getting Images
• OpenShift has many ways to get an image
• Use default images from the image repositories
• Use S2I to build images based on source code
• Use Dockerfile to build your own image and store it in the internal
registry
• Use buildah to build custom images
Understanding Tags
• Tags are used to identify what an image contains
• Tags should be set and used in a way that they are updated if a new version
is available
• myimage:v2.0.1 is a good tag
• myimage:v.2.0.1-nov20 is not a good idea
• For example, a developer that has an Apache image, can tag it with the
Apache version that is in the image, as apache:2.4
• oc tag command is used for tagging images
• oc tag nginx:1.12 nginx:latest ensures that the “latest” tag always
refers to version 1.12
• So users will always use that software version
Understanding Templates
• A template is a ready-to-use file that allows you to create multiple
related objects in OpenShift in an easy way
• Templates contain not just the objects, but also the parameters that
you want to be edited
• Templates can be used to create any object
• Administrators can write their own templates in YAML or JSON, or
instant app and quickstart templates can be used
Instant App and QuickStart Templates
• OpenShift comes with some default instant app and quickstart
templates
• These make creating applications for different languages easier
• Use the Catalog in the web interface to get started with a specific
template
• Or use oc get templates -n openshift to show templates
• oc process --parameters mysql-persistent -n openshift will show
parameters supported by a template
• oc process -o yaml -n openshift mysql-persistent shows a generated
template where all parameters have obtained a default value
Creating Custom Templates
• To ease creation of objects, you can create your own custom
templates
• To create an app, use oc new-app --template=your-template
• It’s a good idea to set default parameters in the template, but you can
overwrite these parameters as well: oc new-app --template=your-
template -p WEB_SERVER=httpd
Demo
• oc get templates
• oc get templates -n openshift
• oc process --parameters mysql-persistent -n openshift
• oc process -o yaml -n openshift mysql-persistent
• Kind: Secret
• Contains password, username etc
• DeploymentConfig
• Replicas, name, containers with environment variables
kind: Template
apiVersion: v1
metadata:
  name: demo-template
labels:
  role: web
message: Deploying ${WEB_SERVER}
objects:
- kind: Pod
  apiVersion: v1
  metadata:
    name: tdemo-pod
  spec:
    containers:
    - name: ${WEB_SERVER}
      image: ${WEB_SERVER}
- kind: Service
  apiVersion: v1
  metadata:
    name: tdemo-svc
  spec:
    ports:
    - port: 80
    selector:
      role: web
- kind: Route
  apiVersion: v1
  metadata:
    name: tdemo-route
  spec:
    to:
      kind: Service
      name: tdemo-svc
parameters:
- name: WEB_SERVER
  displayName: Web Server
  description: Web server image to use
  value: nginx
Try the previous YAML
oc new-app --template=demo-template
Cleanup
oc delete all -l role=web
Try the previous YAML with Environment
Variable
oc new-app --template=demo-template -p WEB_SERVER=httpd
Managing OpenShift Storage
• Understanding OpenShift Storage
• Configuring OpenShift Storage Access
• Setting Up NFS Persistent Storage
• Working With ConfigMaps
Understanding OpenShift Storage
• By default, container storage is ephemeral (temporary)
• OpenShift uses Kubernetes persistent volume to provide storage for pods
• In persistent storage, data is stored external to the Pod, so if the containers
shut down, the data is still available
• Persistent storage is typically some kind of networked storage provided by
the OpenShift administrator
• Persistent volumes are objects that exist independent of any Pod
• Developers create a persistent volume claim (PVC) to request access to
persistent storage without needing to know anything about the underlying
infrastructure
Supported Persistent Storage
• NFS
• GlusterFS
• OpenStack Cinder
• Ceph RBD
• AWS Elastic Block Store
• GCE Persistent Disk
• Azure Disk and Azure File
• VMware vSphere
• iSCSI
• Fibre Channel
• EmptyDir
• and others
Persistent Volume Access Modes
• The access mode defines how nodes can access the storage
• ReadWriteOnce: a single node has read/write access (only 1 node)
• ReadWriteMany: multiple nodes can mount the volume in read/write mode
• ReadOnlyMany: the volume can be mounted read-only by many nodes
Determining Storage Access
• The storage access type in a PVC is matched to volumes offering
similar access modes
• If a developer defines RWO in the PV Claim, it will be matched to a persistent
volume with the same RWO access mode
• Optionally, the PVC may request a specific storage class, using the
storageClassName attribute. In that case, the PVC is matched to PV’s that
have the same storageClassName set
• This forces the pod to use a specific kind of storage
• The PVC is not connected to any specific PV in any way
• The Pod itself has a connection to the PersistentVolumeClaim, NOT to
the Persistent Volume
Configuring OpenShift Storage Access
Creating PVs and PVC resources
• Objects need to be created in the right order
• First, the PersistentVolumes need to be created
• Next, the PersistentVolumeClaims are created
• Finally, the Pods are configured to use a specific PVC
Using NFS for Persistent Volumes
• Mapping between container UIDs and UIDs on an NFS server doesn’t work,
as container UIDs are randomly generated
• To use NFS share as an OpenShift PV, it must match the following
requirements
• Owned by nfsnobody user and group
• Permission mode set to 700
• Exported using all_squash option
• Consider using async export option for faster handling of storage requests
Set Up NFS Storage
• yum install -y nfs-utils
• mkdir /storage
• chown nfsnobody.nfsnobody /storage
• chmod 700 /storage
• echo "/storage *(rw,async,all_squash)" >> /etc/exports
• systemctl enable --now nfs-server
Create PV
• nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /storage
    server: 172.17.0.1
    readOnly: false
Add the PV to OpenShift
• oc login -u system:admin -p anything
• oc create -f nfs-pv.yaml
• oc get pv | grep nfs
• oc describe pv nfs-pv
Create PVC
• nfs-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-pv-claim
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
Adding a PVC
• oc create -f nfs-pvc.yaml
• oc describe pvc nfs-pv-claim
• oc get pv
• Look for the “bound” state
Create Pod with PVC
kind: Pod
apiVersion: v1
metadata:
  name: nfs-pv-pod
spec:
  volumes:
  - name: nfs-pv
    persistentVolumeClaim:
      claimName: nfs-pv-claim
  containers:
  - name: nfs-client1
    image: toccoag/openshift-nginx
    ports:
    - containerPort: 8081
      name: "http-server1"
    volumeMounts:
    - mountPath: "/nfsshare"
      name: nfs-pv
    resources: {}
  - name: nfs-client2
    image: toccoag/openshift-nginx
    ports:
    - containerPort: 8082
      name: "http-server2"
    volumeMounts:
    - mountPath: "/nfsshare"
      name: nfs-pv
    resources: {}
 
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...ranjana rawat
 
Vivazz, Mieres Social Housing Design Spain
Vivazz, Mieres Social Housing Design SpainVivazz, Mieres Social Housing Design Spain
Vivazz, Mieres Social Housing Design Spaintimesproduction05
 
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance BookingCall Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Bookingroncy bisnoi
 
UNIT-III FMM. DIMENSIONAL ANALYSIS
UNIT-III FMM.        DIMENSIONAL ANALYSISUNIT-III FMM.        DIMENSIONAL ANALYSIS
UNIT-III FMM. DIMENSIONAL ANALYSISrknatarajan
 

Kürzlich hochgeladen (20)

KubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghlyKubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghly
 
University management System project report..pdf
University management System project report..pdfUniversity management System project report..pdf
University management System project report..pdf
 
data_management_and _data_science_cheat_sheet.pdf
data_management_and _data_science_cheat_sheet.pdfdata_management_and _data_science_cheat_sheet.pdf
data_management_and _data_science_cheat_sheet.pdf
 
Online banking management system project.pdf
Online banking management system project.pdfOnline banking management system project.pdf
Online banking management system project.pdf
 
Intze Overhead Water Tank Design by Working Stress - IS Method.pdf
Intze Overhead Water Tank  Design by Working Stress - IS Method.pdfIntze Overhead Water Tank  Design by Working Stress - IS Method.pdf
Intze Overhead Water Tank Design by Working Stress - IS Method.pdf
 
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdfONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
 
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
 
Call for Papers - International Journal of Intelligent Systems and Applicatio...
Call for Papers - International Journal of Intelligent Systems and Applicatio...Call for Papers - International Journal of Intelligent Systems and Applicatio...
Call for Papers - International Journal of Intelligent Systems and Applicatio...
 
Unit 1 - Soil Classification and Compaction.pdf
Unit 1 - Soil Classification and Compaction.pdfUnit 1 - Soil Classification and Compaction.pdf
Unit 1 - Soil Classification and Compaction.pdf
 
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
 
FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756
FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756
FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756
 
VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...
VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...
VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...
 
Extrusion Processes and Their Limitations
Extrusion Processes and Their LimitationsExtrusion Processes and Their Limitations
Extrusion Processes and Their Limitations
 
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 BookingVIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
 
Glass Ceramics: Processing and Properties
Glass Ceramics: Processing and PropertiesGlass Ceramics: Processing and Properties
Glass Ceramics: Processing and Properties
 
(INDIRA) Call Girl Bhosari Call Now 8617697112 Bhosari Escorts 24x7
(INDIRA) Call Girl Bhosari Call Now 8617697112 Bhosari Escorts 24x7(INDIRA) Call Girl Bhosari Call Now 8617697112 Bhosari Escorts 24x7
(INDIRA) Call Girl Bhosari Call Now 8617697112 Bhosari Escorts 24x7
 
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
 
Vivazz, Mieres Social Housing Design Spain
Vivazz, Mieres Social Housing Design SpainVivazz, Mieres Social Housing Design Spain
Vivazz, Mieres Social Housing Design Spain
 
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance BookingCall Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
 
UNIT-III FMM. DIMENSIONAL ANALYSIS
UNIT-III FMM.        DIMENSIONAL ANALYSISUNIT-III FMM.        DIMENSIONAL ANALYSIS
UNIT-III FMM. DIMENSIONAL ANALYSIS
 

Understanding Containers
• Containers are the modern-day replacement for applications that are installed directly on servers
• Containers include all dependencies that are required to run an application and are started on top of a container engine
• Containers do not include a kernel, but run on the host OS kernel
• Docker is the most common container solution
• The Docker engine is a common engine, but not the only one: in RHEL 8, for example, containers can run natively on top of the RHEL OS. Container technology is still fast-moving and always subject to change.

PaaS?
• Platform as a Service (PaaS) is a category of cloud computing services that provides a platform allowing customers to develop, run, and manage applications without the complexity of building and maintaining the infrastructure typically associated with developing and launching an app (Wikipedia)
• OpenShift is a PaaS solution that adds different PaaS features to a Kubernetes/Docker environment
  • Remote management
  • Multitenancy
  • Security
  • Monitoring
  • Application life-cycle management
  • Auditing

Understanding Kubernetes
• Kubernetes is a portable, extensible open-source platform for managing containerized workloads and services
• Containers need to be orchestrated
• When containers run in an enterprise environment, you need an HA system, which requires orchestration
• Created by Google in 2014, based on Google's internal Borg system
• Kubernetes orchestrates computing, networking, and storage infrastructure
• OpenShift is built on top of Kubernetes, so OpenShift doesn't have to recreate everything, and Kubernetes is currently the de facto standard for container orchestration
Understanding Red Hat Container Management Solution

Understanding Podman
• RHEL 8 includes Podman, a solution to run containers natively on top of RHEL
• No need for Docker
• Podman is for stand-alone containers and is useful to run individual containers without any enterprise features
• If the host fails, the container also fails, and no other host will take over the container
• Difference with Docker: Podman runs containers with a random UID and not as root

Containers Operating System
• Containers can run on top of a full Linux distribution
• For increased efficiency, it's better to run containers on top of a container OS
• Container Linux (formerly CoreOS) is a container OS that came to Red Hat through the CoreOS acquisition
• Already integrated in OpenShift as a container OS that has been under development for a while

OpenShift
• OpenShift is a platform that integrates container management and application builds in an enterprise platform
• OpenShift exists in different forms
  • OKD (previously known as OpenShift Origin) – free
  • OpenShift Container Platform – Red Hat's commercial solution
  • OpenShift Online – multitenant version of OpenShift with infrastructure managed by Red Hat
  • OpenShift on public cloud platforms
    • Azure
    • AWS
    • Google Cloud Platform
    • IBM Cloud
Understanding OpenShift in a Container Environment

Using OpenShift to Manage Containers
How do we manage containers?
• Kubernetes is the de facto standard for managing and orchestrating containers
• OpenShift is not required for managing containers, but offers some significant benefits over Kubernetes
  • Strict security policies – much more secure than default Kubernetes
  • Routers make it easier to access applications
  • Better management of container images
  • S2I – Source-to-Image: developers can automatically build a container from source code, and can even trigger a new build when the source code changes

Understanding OpenShift in a DevOps Environment

Understanding CI/CD
• Continuous Integration (CI) is the integration of source code from multiple authors into a shared source code management (SCM) repository
• Git is such an SCM repository
• Such an environment supports multiple changes per day
• In OpenShift, Git push events can be captured and result in new containers being created automatically
• The result is Continuous Delivery (CD), an environment where new versions of the software are automatically deployed
• In the CI/CD process flow, pipelines play an important role

Understanding Pipelines
Pipelines are a representation of all steps in the CI/CD process
• Build
• Test
• Packaging
• Documentation
• Reporting
• Deployment
• Verification
A common tool for working with pipelines is Jenkins

Understanding OpenShift and DevOps
• For DevOps, using Infrastructure as Code is an important goal
• OpenShift goes beyond that and offers a solution to automate the build of containers, without needing to know anything about infrastructure
• Containers are a perfect solution to isolate the responsibilities of the developer and operations teams
• To do so, pipelines are integrated. Pipelines are a solution that allows teams to automate and organize all activities required to deliver software changes
• These pipelines are offered through integrated Jenkins Pipelines
• OpenShift supports all five stages of the DevOps application lifecycle

OpenShift and the DevOps Lifecycle
• Build: developers can build applications quickly and easily, without the need for IT operations to set up anything
• Test: Continuous Integration (CI) is offered through the built-in Jenkins CI server and lets developers integrate code automatically with every change
• Operate: Continuous Delivery (CD) is offered using pipelines to automate every step of the application delivery
• Deploy: auto-scaling features ensure that the required number of instances is available at all times
• Monitor: metrics, health checks, and self-healing ensure that the environment stays healthy
OpenShift Architecture
Master Node (RHEL / Atomic):
• API
• Authentication
• Replication
• Scheduler
Worker Node 1 (RHEL / Atomic): containers C1 C2 C3 C4
Worker Node 2…n (RHEL / Atomic): containers C1 C2 C3 C4
Understanding OKD
• OpenShift uses the OKD project as its upstream
• OKD = OpenShift Kubernetes Distribution
• Kubernetes is an important part of OpenShift
• "OKD is a distribution of Kubernetes optimized for continuous application development and multi-tenant deployment. OKD adds developer and operations-centric tools on top of Kubernetes to enable rapid application development, easy deployment and scaling, and long-term lifecycle maintenance for small and large teams. OKD is the upstream Kubernetes distribution embedded in Red Hat OpenShift." (okd.io)

Understanding OpenShift on Kubernetes
• OpenShift adds features on top of Kubernetes, but uses the core Kubernetes infrastructure
• OpenShift adds resource types to the Kubernetes environment and stores them in etcd
• Most OpenShift services are implemented as Docker containers
• OpenShift adds xPaaS, a set of middleware services that can be offered as PaaS, by adding JBoss middleware solutions
  • xPaaS = aPaaS, iPaaS, bpmPaaS, dvPaaS, mPaaS + OpenShift
• Some Kubernetes resource types are not available in OpenShift

Understanding the Purpose
• Kubernetes focuses on providing container orchestration
• OpenShift adds features to that:
  • A build strategy to build source code
  • A built-in container registry
  • Version control integration
  • Security
Shared Resource Types
Kubernetes and OpenShift share some resource types:
• Pods
  • The minimal entity that is managed in an OpenShift or Kubernetes environment
  • Typically contains a container
  • OpenShift doesn't run containers by themselves; to run containers, OpenShift manages Pods
  • Usually contains only one container, but this depends on the microservices architecture
• Namespaces
  • Called projects in OpenShift
  • Provide a strictly isolated environment offered by the Linux kernel
  • It is impossible for pods running in one namespace to interfere with pods running in a different namespace
• Deployment Config
  • The configuration file that defines the application
  • One of the things it does is take care of replication: the number of instances of an application that you want to run
• Services
  • Expose the application to the outside world
• Persistent Volumes and Volume Claims
  • Used for setting up storage
  • A persistent volume is the external storage that you want to use in the OpenShift environment
  • A volume claim is the claim that the deployment config uses against that persistent volume
  • The volume claim allows the deployment config to tell the persistent storage, "hey, I need 5GB"
• Secrets
  • A solution to store secret information and connect it to the pod (API keys, passwords, SSH keys, etc.)
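The volume claim described above can be sketched as a short manifest; the claim name and size here are hypothetical:

```yaml
# Hypothetical claim requesting 5GiB of persistent storage; a deployment
# config can reference this claim by name to mount the matched volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-storage
spec:
  accessModes:
  - ReadWriteOnce        # volume is mounted read-write by a single node
  resources:
    requests:
      storage: 5Gi
```

Once created (for example with oc create -f pvc.yaml), the claim is bound to a persistent volume that satisfies the request.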
OpenShift Resource Types
Some resource types are unique to OpenShift:
• Images
  • The product delivered by Source-to-Image
  • In Kubernetes, the image usually comes from Docker, or is a manually created image
  • OpenShift integrates the image build process
• Image Streams
  • A tagged reference to an image; tags can be used to assign new version numbers, etc.
• Templates
  • Allow you to run applications in a standardized way
• Build Config
  • Defines how an application is built in the OpenShift environment
• Routes
  • A solution that allows you to create a DNS name (FQDN), which can be used to access the application publicly (over the Internet, an internal network, etc.)
  • No such resource type exists in Kubernetes
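As a sketch, a Route that publishes a service under an FQDN might look like this; the host and service names are hypothetical:

```yaml
# Hypothetical route: exposes the service "demo-app" on a public DNS name.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: demo-app
spec:
  host: demo-app.apps.example.com   # the FQDN clients use to reach the app
  to:
    kind: Service
    name: demo-app                  # the service that receives the traffic
```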
OpenShift in a Hybrid Cloud Environment

Understanding Hybrid Cloud
• A hybrid cloud is a cloud that combines different types of cloud services
• This can be a private cloud combined with a public cloud
• But also an IaaS cloud combined with a PaaS cloud
• OpenShift is a hybrid cloud solution, as it allows you to run containers on any IaaS cloud solution
• The IaaS cloud is a solution for managing large infrastructures
• OpenShift is the solution to easily deploy an application on top of that infrastructure
• In an OpenShift context, the hybrid cloud provides ultimate flexibility by combining containers and an IaaS cloud

Understanding the IaaS Layer
• The IaaS layer offers flexibility in deploying an infrastructure
• OpenShift can be installed in a traditional physical data center
• But for more flexibility to scale up host machines in a dynamic and automated way, we need an IaaS cloud
• In IaaS, every part of the infrastructure can be automated
  • Virtual machines
  • Storage volumes
  • Subnets
  • Firewalls
• If we install OpenShift on top of IaaS, we have two layers of automation: at the infrastructure level and at the application level
• Automated deployment offers the flexibility that is required to easily scale up applications
• With just IaaS, it's difficult to have automated application deployment; automation exists only at the infrastructure level

Understanding the OpenShift Layer
• OpenShift allows developers to define an application in a simple YAML file that will fetch the source code from a GitHub repository
• OpenShift on IaaS allows developers to focus on the application, while ignoring the required underlying infrastructure
• Ansible can be used for full integration and automation: Ansible is the solution for automating everything
Hybrid (IaaS+PaaS) Cloud Environment
[Diagram: an OpenStack IaaS layer with a control node and compute nodes hosts virtual machines VM1, VM2, … VMn, VMn+1; on top of those VMs, an OpenShift cluster runs a master and worker nodes, and the workers run the containers C1 C2 C3 C4]
Lesson 2: Installing OpenShift
• Understanding OpenShift Versions
• Installing Minishift
• Using oc cluster up

OpenShift Installation Options
• Red Hat OpenShift
  • Licensed version of OpenShift, used by companies and enterprises
  • Can be installed as an on-premise cluster
  • Can also be installed in a public or private cloud
• OKD
  • Community supported
  • Minishift (POC only)
    • A nice way to get to know OpenShift
    • Only requires 4GB of RAM
  • OKD in a container: oc cluster up
  • OKD in a public or private cloud
  • Installed as an on-premise cluster

Minishift Installation Options
• Minishift is available for different operating systems
• You will need a hypervisor
  • macOS: xhyve
  • Linux: KVM
  • Windows: Hyper-V
  • Cross-platform: VirtualBox
• Basically, it's a VM
Managing Minishift Addons
• Minishift, by default, has a couple of restrictions which mean certain security settings won't work
• To make Minishift more relaxed, you'll need to enable some addons:
  • minishift addons list – shows current addons
  • minishift addons enable admin-user – creates a user with cluster admin permissions
  • minishift addons enable anyuid – allows containers to run with any UID
• It makes sense to use the admin user in Minishift, since you will need an admin user for infrastructure-related tasks, and it will most probably be a single-user environment

Installing the OpenShift Client
• The oc client is used on all types of installations
• Download the client software from www.okd.io
• Extract and copy the oc binary to /usr/local/bin, or add its directory to the PATH environment variable
• After extracting, type oc or oc status to verify that the command is available

Add minishift & oc to the PATH environment variable (Windows)
• Start > "Edit the system environment variables"
• Environment Variables…

Try some commands
• minishift addons list
• oc status
• oc whoami
• oc login -u developer -p anything
Understanding oc cluster up
• Runs a couple of containers directly on top of Docker
• Requirements: Docker CE and the OpenShift client
• The oc cluster up method uses the Docker engine and the OpenShift client utility to spin up a proof-of-concept cluster
• Use it as an alternative to Minishift

Using oc cluster up
• Always check the current version of the documentation
• Install docker-ce
• Edit the file /etc/docker/daemon.json:
  { "insecure-registries": ["172.30.0.0/16"] }
  • This allows running the Docker registry in a private network
• systemctl daemon-reload; systemctl restart docker
• Disable the firewall
• docker run nginx to create a local config by starting a random container
• Type sudo oc cluster up; this takes about 10-15 minutes
• Check using docker ps
• Shut down with oc cluster down
Lesson 3: Getting Started with OpenShift
• Getting Started with the Web Console
• Understanding Resource Types: Pods & Namespaces
• Understanding Resource Types: Deployment Configs & Networking
• Managing Resources from the Command Line
• Using Source-to-Image to Create Applications
• Basic OpenShift Troubleshooting

Getting Started with the Web Console

Understanding Projects - 1
• OpenShift is oriented around the project
  • An isolated environment
• Different items exist within a project
  • Applications: the containers that provide services
  • Builds: the process that defines how to build the container from a repo
  • Resources: additional optional configuration
  • Storage: persistent storage that can be used by the applications
• Tip: OpenShift cheat sheet
  • https://is.gd/openshift_cheatsheet

Understanding Projects – 2
• In OpenShift, you deploy applications (microservices). Each application consists of different projects, where a project is a part of the application stack
• Projects: a project is a Kubernetes namespace that contains all services running in the OpenShift application and works as a strictly separated environment
  • Useful in multi-tenant deployments, where customer A and customer B can have completely separated environments
  • Namespaces are implemented by the Linux kernel; they separate the network, filesystem, etc.
• Specific users may have access to specific projects only
• Type oc config get-contexts to see all current projects (all users) and oc projects to see your current projects (your account)
• After logging in, you'll see which projects you have access to
• Use oc project myproject to switch to a different project
• Resources will always be specific to a project
  • If you run an application in one project, it will not be visible in another project
Demo: Creating an Application
• From the Catalog, select PHP, version 7.1
• Provide a name for the application
• Specify the Git repository to use
  • https://github.com/WordPress/wordpress.git
• Click Create to launch, then close that window
• Go to Overview, where you can see the application being built. Click it to see details
• Select Builds, where you can see the actual application
• Click further into the application details to explore what it is doing
• At the end of the build, an image is created and pushed to the OpenShift container registry
• Check for success in the Events log
• Check Routes; it contains the DNS name used to get to the application

Understanding Resource Types: Pods and Namespaces
• OpenShift runs containers
• But OpenShift doesn't manage containers; it manages pods
• It uses a Deployment Config to manage pods

Understanding Resource Types
• The result of your efforts in OpenShift is a microservice – also referred to as an app
• The app is created in an OpenShift project, which corresponds to a Kubernetes namespace – an isolated environment implemented by the Linux kernel
• An app consists of different resources – like building blocks
• The resource types are specified in the OpenShift API
  • The OpenShift API defines the resource types; if the API is updated, new resources become available
• As OpenShift is built on top of Kubernetes, most resource types from the Kubernetes API are also supported
• There are two options to create an app (and all required resources)
  • Use oc new-app
  • Create a manifest file in YAML to identify all the different resources
Understanding Namespaces
• Namespaces are an important part of OpenShift from an architectural point of view
• A Kubernetes namespace is a group of isolated resources that behaves as a cluster; in OpenShift we call this a project
• Namespaces implement isolation at the Linux kernel level and are available at different levels
  • mount -> filesystem; only presents one specific area of the filesystem
  • PID -> process table; each container can only see its own PID table and cannot see what's happening in another namespace
  • network -> makes every namespace an isolated network; namespaces can only communicate with each other through routing
  • IPC -> inter-process communication is limited to processes within the namespace; communicating with processes outside the namespace is not possible
  • User ID -> you can have users with the same ID and name in different namespaces, as if they were on different computers
  • Cgroup -> Linux feature that allows resource allocation, to make sure that every container has dedicated RAM, CPU cycles, and so on
• Because namespaces are used, a strictly isolated environment can be implemented

Understanding Pods
• An application is defined in an image
  • Analogy: it's like an ISO file, an installer
• A container is a run-time instance of an image
• A Pod is a solution to run groups of containers
• Using Pods allows you to group multiple applications
• Usually there is only one container in a Pod, as per microservices best practices
• Containers in a pod have an isolated PID namespace and filesystem namespace, but share the same network namespace, volumes, and hostname
• Containers in a Pod will always run on the same host
  • It's not possible to spread out containers if they are in the same pod

Demo
• oc whoami
• oc get pods
  • Gets information
  • The -build pod reveals information about the build process, getting source from the repo, etc.
• oc get all
  • We did not create a pod, we created an application
  • It lists all the components/resources that were created with the application
  • The most important here is the deploymentconfig; it is what will be used to run the different pods
Create a YAML file to create a pod – helloworld.yaml

apiVersion: v1
kind: Pod
metadata:
  name: examplepod
spec:
  containers:
  - name: ubuntu
    image: ubuntu:latest
    command: ["echo"]
    args: ["hello world"]
Understanding Resource Types: Deployment Configs and Networking

Understanding Deployment Config
• To run Pods, you start a Deployment Config, as these add useful features to the Pods
• From the user's perspective, creating a new app means creating a new deployment config
• One of these features is the Replication Controller, which takes care of the replication of pods and is a part of the deployment
• The Update Strategy is also a part of the deployment
  • Rolling update: maintains the desired number of pods while updating
  • Recreate: stops all Pods and deploys new Pods
  • Custom: allows you to run any command in the deployment
• Triggers define when a new deployment should be created
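The pieces named above (replication, update strategy, triggers) come together in the deployment config itself. A minimal sketch, with hypothetical names and image:

```yaml
# Hypothetical deployment config: the replication controller keeps 3 pods
# running, updates roll out without downtime, and a configuration change
# triggers a new deployment automatically.
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: demo-app
spec:
  replicas: 3
  selector:
    app: demo-app
  strategy:
    type: Rolling          # maintain the desired number of pods during updates
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: nginx:1.14
  triggers:
  - type: ConfigChange     # redeploy when the configuration changes
```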
Understanding Deployment Triggers
• When critical components change, you would like a new deployment to be generated automatically
• Use oc describe on a deployment and look for triggers to figure out the default triggers
  • ConfigChange: triggers a new deployment on a configuration change
  • Image: triggers a new deployment when a new image is available
• Manual triggers can be issued using oc deploy myapp --latest
Understanding Replication Controllers
• The Replication Controller (RC) is a part of the Deployment Config
• The RC uses labels and selectors to track the availability of Pods
  • Every pod has a label by default
  • Manual labels can be set as well
• The RC uses a selector to specify which labels should be used
• Use oc get pods --show-labels to show the labels that OpenShift has automatically added
• Use oc describe rc <name> to see the current selector that is used
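The label/selector relationship above can be sketched as two matching fragments (names hypothetical): the RC's selector must match the labels on the pods it tracks.

```yaml
# Fragment of the replication controller spec: which pods do we track?
selector:
  app: demo-app
  deploymentconfig: demo-app
---
# Fragment of the pod metadata: the labels the selector matches against.
labels:
  app: demo-app
  deploymentconfig: demo-app
```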
Understanding Services and Routes
• If you look at the Overview tab in OpenShift, you can see the available applications, including the URL you need to access each application
• On a replicated application, there's a load balancer behind the scenes to decide which pod to connect to
• The service takes care of load balancing and gives the application one identity
• The route is what provides a published URL, and what allows access to the application from outside the cluster
• On Kubernetes, this is based on the ingress controller, which needs additional configuration

Demo
• oc get dc
  • The demo app; in the web console it's called an app
  • "Triggered by" shows the values that will trigger a deployment
• oc get rc
  • Information about the replication controller
  • How many replicas are there?
• oc get pods --show-labels
  • The labels are shown here, e.g. app=demo-app
  • They connect the pods to the deploymentconfig
• oc describe rc demo-app-1
  • We can see the complete configuration
  • Name, namespace, selector, labels, replicas, strategy, status, containers, image, etc.
  • 67. Demo: Managing Resources from the Command Line • oc login -u developer -p anything • oc new-project firstproject • oc new-app --docker-image=nginx:1.14 --name=nginx • oc status (use repeatedly to trace the process) • oc get pods • oc describe pod <podname> • oc get svc • oc describe service nginx • oc port-forward <podname> 33080:80 • curl -s http://localhost:33080
  • 68. Demo: Creating another App • oc whoami • oc new-project mysql • oc new-app --docker-image=mysql:latest --name=mysql-openshift -e MYSQL_USER=myuser -e MYSQL_PASSWORD=password -e MYSQL_DATABASE=mydb -e MYSQL_ROOT_PASSWORD=password • oc status -v • oc get all • oc get pods -o wide • Log in to the web console and see the new app in the different project
  • 69. Using Source-to-Image to Create Applications • An important part of OpenShift that allows developers to automatically build containers based on source code in a Git repository
  • 70. Understanding S2I • To create images automatically, a Dockerfile could be used • Source-to-Image (S2I) takes application source code from a source control repository (such as Git) and builds a container image from it to run the application • While doing so, the image is pushed to the OpenShift registry • Using S2I allows developers to build running containers without the need to know anything about the specific OS platform • S2I also makes it easy to patch: after updating the application code, a new image is generated • This process is handled as a rolling upgrade
  • 71. Images and Image Streams • OpenShift works with Image Streams • An Image Stream is a consolidated view on related images • An image is a runtime template that contains all data that is needed to run a container • This includes metadata that describes image needs and capabilities • Images in an image stream are identified by a tag, and can be specified as such • image=nginx:1.8 • Two types of images exist • Builder images are used in the S2I process to build applications • The result is a runtime image that is used to start an application • Like an ISO file that is used to spin up the application
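As an illustration, a minimal ImageStream that tracks an external nginx image could look like the sketch below (the tag and repository are examples, and the apiVersion may differ between OpenShift versions):

```yaml
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: nginx
spec:
  tags:
  - name: "1.8"                           # image stream tag, referenced as nginx:1.8
    from:
      kind: DockerImage
      name: docker.io/library/nginx:1.8   # external image this tag points to
```

Deployments that reference the nginx:1.8 image stream tag can be retriggered automatically when the tag is updated.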
  • 72. Exploring Builder Images • Default Builder Images are available in OpenShift • Check the Catalog in the browser interface • PHP, etc. • Or use oc get is -n openshift for an overview • Alternatively, builder images can be created by the administrator
  • 73. Understanding the S2I Flow • To build an image based on source code, a base image is required; this image is known as the builder image and is used as a runtime environment • Base builder images such as Python and Ruby are included • Builder Images are available in the catalog that you see in the web interface • When either the application source code or the builder image gets updated, a new container image can be created • Applications need to be rebuilt after a change of either the application code or the builder image itself • Applications are built against image streams, which are resources that name specific container images with image stream tags • The base S2I images may be obtained from a trusted repository, or can be self-built
  • 74. Building an Application - 1 • The oc new-app command is used to build the application from a Git repository • Use oc new-app php~http://github.com/sandervanvugt/simpleapp --name=myapp to build the application from the Git repository • In this command, the php part in front of the URL indicates the image stream that is to be used • If no image stream is given, the oc new-app command tries to detect which image stream to use based on the presence of some files • Use oc new-app -o yaml php~http://github.com/sandervanvugt/simpleapp --name=myapp > s2i.yaml to automatically generate a YAML definition file that contains all resources to be created • The app itself is NOT created
  • 75. Building an Application - 2 • After creating the new application, the build process starts. Type oc get builds for an overview • A BuildConfig can be used to trigger a new build • The BuildConfig pod is responsible for creating images in OpenShift and pushing them to the internal Docker registry
  • 76. Explore the New App YAML File • Kind: ImageStream • Kind: BuildConfig • Source: describes where the source is coming from • Strategy: defines how we want to build the source • Kind: DeploymentConfig • Labels that we have set • Number of replicas • Containers that have been built in the previous step • :latest refers to the latest image
  • 77. Demo: Building an Application • oc logs -f bc/simple-app to track the progress • oc status: simpleapp is now deployed • oc get all • Now we have the pod, replicationcontroller, service, deploymentconfig, and buildconfig • oc get builds • Shows info about the build that we just ran • oc describe build simple-app-1 (name of the build from the previous command)
  • 78. Basic OpenShift Troubleshooting • oc get events will show recent events • oc logs <podname> will show what has happened on a specific pod • oc describe pod <podname> will show all pod details • oc projects will show all projects, you might be in the wrong project! • oc delete all -l app=simpleapp will delete everything using that label • When we create an app we also create a Pod, DeploymentConfig, ReplicationController, BuildConfig, etc. It's better to delete all of them based on the label • oc delete all --all • Deletes everything in the current project
  • 79. Part 2 – Managing & Deploying OpenShift
  • 80. Lesson 4 – Managing OpenShift Networking • Understanding Software Defined Networking • Understanding OpenShift SDN • Understanding Services • Understanding Routes • Creating Routes
  • 81. Understanding Software Defined Networking • (Diagram: nodes Node1, Node2 and Node3 running instances ins1, ins2 and ins3; an SDN overlay network on top of a routed underlay network provides direct communication between the instances)
  • 82. Understanding OpenShift SDN • On Docker, containers connect to host-only virtual bridge • Communication with containers on other hosts goes through port mapping • Container ports are bound to ports on the host • OpenShift SDN decouples the control plane from the data plane and thus implements SDN • SDN is implemented with plugins • A plug-in adds knowledge about specific networking to the infrastructure • The cluster network is created using Open vSwitch • Master nodes do not have access to containers, unless this was specifically enabled • This is a security feature
  • 83. Understanding OpenShift SDN Plug-ins • ovs-subnet: provides a flat pod network where every pod can communicate with every other pod and service • It is an Open vSwitch plug-in, hence ovs • ovs-multitenant: isolates networking per project • Each project gets its own Virtual Network ID (VNID) • Pods can only communicate with Pods that share this VNID • Pods with VNID 0 can communicate with all other pods and vice versa • Usually for management / administrative pods • The default project (all the management containers for OpenShift) has a VNID of 0 • ovs-networkpolicy: allows administrators to define their own policies • To do so, NetworkPolicy objects are used
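With the ovs-networkpolicy plug-in, policies are regular Kubernetes NetworkPolicy objects. A minimal sketch (the policy name is an example) that only allows ingress traffic from Pods in the same project:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}            # applies to all Pods in the project
  ingress:
  - from:
    - podSelector: {}        # only Pods from this same namespace may connect
```

Because an empty podSelector matches everything in the namespace, this policy effectively isolates the project from other projects, similar to what ovs-multitenant does by default.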
  • 84. Understanding Pod Networking • Each pod has its own unique IP Address • Containers within a pod behave as if they are all on the same host • As mentioned previously, each pod usually only has one container • As a result, pods are treated like physical or virtual machines • To access Pods, services are used
  • 85. Understanding Pod Networking • (Diagram: one Pod with a single IP address, containing containers C1, C2, and C3) • From the outside, each container can only be reached through its port, as the Pod has just one IP address
  • 86. Understanding Services • Services implement round-robin load balancing to access pods • We can have multiple Pods that are presented to the end user as one application (replicas), and they need to be load balanced • The service has a stable IP address and allows communication with pods for external clients • Services also allow replicated pods to communicate with one another • Services use a selector attribute to connect to Pods • Each pod matching the selector is added to the service resource as an endpoint • Pod as well as service IP addresses cannot be reached from outside the cluster (pods use private IPs) • We will use a router instead, to be able to access the pods externally
  • 87. Understanding Services

  apiVersion: v1
  kind: Service
  metadata:
    labels:
      app: my-app
    name: my-app
  spec:
    ports:
    - name: 8080-tcp
      port: 8080                # exposed port
      protocol: TCP
      targetPort: 8080
      nodePort: 38080
    selector:                   # selects which app is managed by this service
      app: my-app
      deploymentconfig: my-app
    type: NodePort
  • 88. Getting Traffic in and out of the Cluster Three methods exist for clients that need access to an OpenShift service • HostPort/HostNetwork: clients can reach the Pod directly by using forwarded ports. Ports in the pod are bound to ports on the host where it is running. Escalated privileges are required to use this method • Not flexible, as it requires privilege escalation, thus not very common • NodePort: the service is exposed by binding to available ports on the node host. The node host proxies connections to the service IP address • NodePort supports any traffic type • NodePorts are in the range of 30000-32767 by default. This can be changed • If not specified, a random nodePort is assigned by OpenShift • One usually specifies a port in the default range, as shown in the previous YAML example • OpenShift routes: services are exposed using a unique URL • Routes support HTTP, HTTPS, TLS with SNI, and WebSockets only • Web-based protocols, like a reverse proxy
  • 89. How They All Interconnect To Each Other • (Diagram: an external client resolves a DNS name to a route; the route forwards traffic to a service VIP (1.2.3.4), which round-robin load-balances across NodePorts to Pods P1 to P4 listening on port 8080)
  • 90. Understanding Routes • OpenShift routes allow network access to pods from outside the OpenShift environment • If you want your app to be accessed by external users, you will need a route • A dedicated router pod is used to load-balance traffic between the target Pods • The router pod uses HAProxy and can be scaled itself • The router pod queries the etcd database on the OpenShift master to get information about the Pods • The router exposes a public-facing IP address and DNS hostname that map to the internal Pod network • Routers connect directly to the Pods; the service is used for Pod lookup only and is not involved in the actual traffic flow
  • 91. Route YAML Code

  apiVersion: v1
  kind: Route
  metadata:
    creationTimestamp: null
    labels:
      app: my-app
    name: my-app
  spec:
    host: externaldnsname.apps.example.com
    port:
      targetPort: 8080-tcp
    to:
      kind: Service
      name: my-app
  • 92. Routers: Behind The Scenes • oc whoami • Need to be system:admin • oc projects • oc get all -n default • pod/router-xxxx • oc describe pod/router-xxxxx -n default
  • 93. Creating Routes • oc expose service my-app --name my-app [--hostname=my-app.apps.example.com] to create a route on top of an existing service • Specify a DNS name only if this name can be resolved to a wildcard DNS domain • If a DNS name is not specified, a name will be automatically generated • Alternatively, use oc create combined with a YAML or JSON file • Note that oc new-app does NOT create a route • Because you don't want your newly deployed application automatically exposed, for security reasons • Use oc delete route to un-expose a service
  • 94. Managing Router Properties • The default routing subdomain is set in the master-config.yaml OpenShift configuration file:

  routingConfig:
    subdomain: apps.example.com

  • Note that the router must be able to bind to ports 80 and 443; do NOT run a router on a host that already uses these ports for something else
  • 95. Understanding Router Types • Secure routes can use several types of TLS termination • Edge Termination: TLS is terminated at the router, and traffic from router to Pods is not encrypted • Pass-through Termination: the router sends TLS traffic straight through to the Pod, and the Pod is responsible for serving certificates • Re-encryption Termination: the router terminates the TLS traffic and re-encrypts traffic to the endpoint • Insecure routes don't do TLS termination, so they are easier to set up
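An edge-terminated route could be sketched like this (hostnames are placeholders and the certificate/key contents are deliberately omitted):

```yaml
apiVersion: v1
kind: Route
metadata:
  name: my-app-secure
spec:
  host: my-app.apps.example.com   # example hostname
  to:
    kind: Service
    name: my-app
  tls:
    termination: edge             # TLS ends at the router; router-to-Pod traffic is plain HTTP
    certificate: |-
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
    key: |-
      -----BEGIN PRIVATE KEY-----
      ...
      -----END PRIVATE KEY-----
```

For pass-through termination, the tls block would only contain `termination: passthrough`, since the Pod serves its own certificate.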
  • 96. Try To Create Routes • oc whoami • As developer • oc get all • Find out what pods and services we have • oc expose [servicename] • oc expose svc/httpd • or oc expose httpd --name httpd • oc get all • Now it's there • oc describe route [routername] • oc describe route httpd • Pay attention to Requested Host: • Endpoints: -> how we get to the Pods
  • 97. Lesson 5: Deploying Applications • Scaling Applications • Scheduling Pods • Managing Images and Image Streams • Managing Templates
  • 98. Understanding Application Scaling • Application scaling is handled by the replication controller • The replication controller ensures that the number of pods that is specified in the replica count is running at all times • To do so, the replication controller monitors the pods by using labels as the selector • This selector is a set of labels that exists in the Pod as well as in the Replication Controller • Replication Controllers can be managed directly, but it's recommended to manage them through Deployment Configs
  • 99. Scaling Applications The number of replicas can be scaled manually or automatically using autoscaling • Manual Scaling • oc get dc • oc scale --replicas=5 dc simpleapp • Autoscaling • The HorizontalPodAutoscaler resource type is used to automatically scale based on the current load on application pods
  • 100. Understanding Autoscaling • The HorizontalPodAutoscaler uses performance metrics that are collected by the OpenShift Metrics subsystem • If this subsystem is in place, use oc autoscale dc/myapp --min 1 --max 10 --cpu-percent=80 to automatically scale • This command creates a HorizontalPodAutoscaler object that changes the number of replicas such that the pods are kept below 80% of CPU usage
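The oc autoscale command above generates an object roughly like this sketch (the name myapp is an example, and field details can vary per OpenShift version):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:                        # the object whose replica count is adjusted
    kind: DeploymentConfig
    name: myapp
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80     # scale to keep average CPU around/below 80%
```

Use oc get hpa to inspect the autoscaler and its current/target CPU utilization.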
  • 101. Manual Scaling • oc new-app -o yaml php~https://github.com/sandervanvugt/simpleapp --name=simpleapp > s2i.yaml • Open the YAML file • Go to the DeploymentConfig section • replicas: 1 • Standard replication • Deploy the app • oc get dc • Now we can see the replicas
  • 103. Understanding Pod Scheduling • Pods by default are distributed between the nodes in a cluster • The scheduling process can be manipulated, using different items • Zones and Regions • Node labels • Affinity rules and anti-affinity rules • All nodes, including the master can run Pods • You should only run the web console Pod on the master • Use the Ansible variable osm_default_node_selector to enable/disable running pods on the master • This is configured during installation of OpenShift cluster
  • 104. Understanding the Pod Scheduler Algorithm • Pod scheduling is a 3-step process • Filter nodes • The scheduler filters nodes according to node resources that are required by pods • Maybe some pods require something specific, like SSD storage • Node selectors can be used in this process • Pods can also request access to specific resources • Prioritize the filtered list of nodes • Affinity rules: used to ensure that Pods that belong together run close to each other • Anti-affinity rules: ensure that Pods will not run close to each other • Select the best-fit node • The algorithm applies a score to each node • The node with the highest score will run the pod
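Affinity rules are expressed in the Pod spec. A minimal anti-affinity sketch (labels, names, and image are examples, not from the course demo) that keeps replicas off nodes already running a Pod with the same label:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  labels:
    app: demo-app
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: demo-app                     # do not co-locate with Pods carrying this label...
        topologyKey: kubernetes.io/hostname   # ...on the same node
  containers:
  - name: demo
    image: nginx:1.14
```

Replacing podAntiAffinity with podAffinity inverts the rule, forcing Pods that belong together onto the same topology domain.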
  • 105. Understanding Topology • Topology can be applied to make scheduling easier in large datacenters • A topology consists of regions and zones • A region is a set of hosts with a guaranteed high-speed connection between them, typically in the same geographical area • A zone is a set of hosts that share the same infrastructure components (network, storage, power), and for that reason might fail together • For example, resources that run in the same rack in a DC • OpenShift can use region and zone labels when scheduling pods • By default, replica pods are scheduled on nodes within the same region • Within that region, replica pods are spread across nodes with different zone labels
  • 106. Setting Topology Labels • By default, nodes get the region=infra label • Administrators can use the oc label command to set labels on nodes • oc label node node1.example.com region=eu-west zone=rack1 --overwrite • oc label node node2.example.com region=eu-west zone=rack2 --overwrite • To show nodes and their labels, use oc get node node1.example.com --show-labels
  • 107. Taking Down a Node Sometimes you need to take down a node • To take down a node, OpenShift has a two-step process • First, mark the node as unschedulable: oc adm manage-node --schedulable=false node1.example.com • Next, drain the node. This evicts all pods running on the node so that they are recreated somewhere else: oc adm drain node1.example.com • Once finished, use oc adm manage-node --schedulable=true node1.example.com
  • 108. Using Node Selectors • Node labels and node selectors can be used to ensure a Pod is scheduled on a specific node • A node selector in the Pod spec matches labels that are set on nodes • To set a node selector, change the pod definition using oc edit or oc patch • oc patch dc myapp --patch '{"spec":{"template":{"spec":{"nodeSelector":{"env":"qa"}}}}}'
  • 109. Understanding the Default Project • Upon installation, the default project is created • In bigger clusters, it's a good idea to use this project to run infrastructure pods such as the router and the internal registry • To do this, label dedicated infrastructure nodes with region=infra • Next, use oc annotate to add a node selector to the namespace: oc annotate --overwrite namespace default openshift.io/node-selector='region=infra' • This will make sure that the default project is serviced on those specific nodes only
  • 110. Managing Images and Image Streams
  • 111. Understanding Images • An image is a deployable runtime template that includes all that is needed to run a container • In OpenShift, a single image name can refer to different versions of the same image. Docker does not use version numbers, but tags to refer to specific versions of an image • An image stream comprises a number of container images identified by tags • It is a consolidated view of related images • In OpenShift, deployments and builds can receive notifications when new images are added, and as a result trigger a new build or deployment to be started
  • 112. Getting Images • OpenShift has many ways to get an image • Use default images from the image repositories • Use S2I to build images based on source code • Use Dockerfile to build your own image and store it in the internal registry • Use buildah to build custom images
  • 113. Understanding Tags • Tags are used to identify what an image contains • Tags should be set and used in a way that they are updated when a new version is available • myimage:v2.0.1 is a good tag • myimage:v.2.0.1-nov20 is not a good idea • For example, a developer that has an Apache image can tag it with the Apache version that is in the image, as apache:2.4 • The oc tag command is used for tagging images • oc tag nginx:1.12 nginx:latest would make the "latest" tag refer to version 1.12 • So users who use the latest tag always get that software version
  • 114. Understanding Templates • A template is a ready-to-use file that allows you to create multiple related objects in OpenShift in an easy way • Templates contain not just the objects, but also the parameters that you want to be editable • Templates can be used to create any object • Administrators can write their own templates in YAML or JSON, or instant app and quickstart templates can be used
  • 115. Instant App and QuickStart Templates • OpenShift comes with some default instant app and quickstart templates • These make creating applications for different languages easier • Use the Catalog in the web interface to get started with a specific template • Or use oc get templates -n openshift to show templates • oc process --parameters mysql-persistent -n openshift will show parameters supported by a template • oc process -o yaml -n openshift mysql-persistent shows a generated template where all parameters have obtained a default value
  • 116. Creating Custom Templates • To ease creation of objects, you can create your own custom templates • To create an app, use oc new-app --template=your-template • It's a good idea to set default parameters in the template, but you can overwrite these parameters as well: oc new-app --template=your-template -p WEB_SERVER=httpd
  • 117. Demo • oc get templates • oc get templates -n openshift • oc process --parameters mysql-persistent -n openshift • oc process -o yaml -n openshift mysql-persistent • Kind: Secret • Contains password, username, etc. • DeploymentConfig • Replicas, name, containers with environment variables
  • 118.

  kind: Template
  apiVersion: v1
  metadata:
    name: demo-template
  labels:
    role: web
  message: Deploying ${WEB_SERVER}
  • 119.

  objects:
  - kind: Pod
    apiVersion: v1
    metadata:
      name: tdemo-pod
    spec:
      containers:
      - name: ${WEB_SERVER}
        image: ${WEB_SERVER}
  - kind: Service
    apiVersion: v1
    metadata:
      name: tdemo-svc
    spec:
      ports:
      - port: 80
      selector:
        role: web
  - kind: Route
    apiVersion: v1
    metadata:
      name: tdemo-route
    spec:
      to:
        kind: Service
        name: tdemo-svc
  • 120.

  parameters:
  - name: WEB_SERVER
    displayName: Web Server
    description: Web server image to use
    value: nginx
  • 121. Try the previous YAML oc new-app --template=demo-template
  • 122. Cleanup oc delete all -l role=web
  • 123. Try the previous YAML, overriding a template parameter oc new-app --template=demo-template -p WEB_SERVER=httpd
  • 124. Managing OpenShift Storage • Understanding OpenShift Storage • Configuring OpenShift Storage Access • Setting Up NFS Persistent Storage • Working With ConfigMaps
  • 125. Understanding OpenShift Storage • By default, container storage is ephemeral (temporary) • OpenShift uses Kubernetes persistent volumes to provide storage for pods • With persistent storage, data is stored external to the Pod, so if the containers shut down, the data is still available • Persistent storage is typically some kind of networked storage provided by the OpenShift administrator • Persistent volumes are objects that exist independent of any Pod • Developers create a persistent volume claim (PVC) that requests access to persistent storage without the need to know anything about the underlying infrastructure
  • 126. Supported Persistent Storage • NFS • GlusterFS • OpenStack Cinder • Ceph RBD • AWS Elastic Block Store • GCE Persistent Disk • Azure Disk and Azure File • VMware vSphere • iSCSI • Fibre Channel • EmptyDir • and others
  • 127. Persistent Volume Access Modes • The access modes define how nodes can access the storage • ReadWriteOnce: a single node has read/write access (only 1 node) • ReadWriteMany: multiple nodes can mount the volume in read/write mode • ReadOnlyMany: the volume can be mounted read-only by many nodes
  • 128. Determining Storage Access • The storage access mode in a PVC is matched to volumes offering similar access modes • If a developer defines RWO in the PV claim, it will be matched to a persistent volume with the same RWO configuration • Optionally, the PVC may request a specific storage class, using the storageClassName attribute. In that case, the PVC is matched to PVs that have the same storageClassName set • This forces the pod to use a specific kind of storage • The PVC is not connected to any specific PV in any other way • The Pod itself has a connection to the PersistentVolumeClaim, NOT to the PersistentVolume
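A PVC requesting a specific storage class could be sketched like this (the claim name and class name "fast" are examples):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fast-claim
spec:
  accessModes:
  - ReadWriteOnce              # must be offered by the matching PV
  storageClassName: fast       # only PVs with storageClassName: fast are considered
  resources:
    requests:
      storage: 1Gi             # bound to a PV with at least this capacity
```

If no PV with a matching access mode, capacity, and storage class exists, the claim simply stays in the Pending state until one becomes available.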
  • 130. Creating PV and PVC Resources • Objects need to be created in the right order • First, the PersistentVolumes need to be created • Next, the PersistentVolumeClaims are created • Finally, the Pods are configured to use a specific PVC
  • 131. Using NFS for Persistent Volumes • Mapping between container UIDs and UIDs on an NFS server doesn't work, as container UIDs are randomly generated • To use an NFS share as an OpenShift PV, it must match the following requirements • Owned by the nfsnobody user and group • Permission mode set to 700 • Exported using the all_squash option • Consider using the async export option for faster handling of storage requests
  • 132. Set Up NFS Storage • yum install -y nfs-utils • mkdir /storage • chown nfsnobody:nfsnobody /storage • chmod 700 /storage • echo "/storage *(rw,async,all_squash)" >> /etc/exports • systemctl enable --now nfs-server
  • 133. Create PV • nfs-pv.yaml

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: nfs-pv
  spec:
    capacity:
      storage: 2Gi
    accessModes:
    - ReadWriteMany
    persistentVolumeReclaimPolicy: Retain
    nfs:
      path: /storage
      server: 172.17.0.1
      readOnly: false
  • 134. Add the PV to OpenShift • oc login -u system:admin -p anything • oc create -f nfs-pv.yaml • oc get pv | grep nfs • oc describe pv nfs-pv
  • 135. Create PVC • nfs-pvc.yaml

  kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: nfs-pv-claim
  spec:
    accessModes:
    - ReadWriteMany
    resources:
      requests:
        storage: 100Mi
  • 136. Adding a PVC • oc create -f nfs-pvc.yaml • oc describe pvc nfs-pv-claim • oc get pv • Look for the "Bound" state
  • 137. Create Pod with PVC

  kind: Pod
  apiVersion: v1
  metadata:
    name: nfs-pv-pod
  spec:
    volumes:
    - name: nfs-pv
      persistentVolumeClaim:
        claimName: nfs-pv-claim
    containers:
    - name: nfs-client1
      image: toccoag/openshift-nginx
      ports:
      - containerPort: 8081
        name: "http-server1"
      volumeMounts:
      - mountPath: "/nfsshare"
        name: nfs-pv
      resources: {}
    - name: nfs-client2
      image: toccoag/openshift-nginx
      ports:
      - containerPort: 8082
        name: "http-server2"
      volumeMounts:
      - mountPath: "/nfsshare"
        name: nfs-pv
      resources: {}