DEPLOY LIKE A BOSS: USING
APACHE® IGNITE™ AND
KUBERNETES®
DANI TRAPHAGEN
SOLUTION ARCHITECT
GRIDGAIN SYSTEMS
@DTRAPEZOID
AGENDA
• Setting up an Apache Ignite cluster
• Using the Kubernetes IP Finder and the Kubernetes Ignite Lookup Service
• Sharing the Ignite Cluster Configuration
• Deploying your Ignite Pods
• Adjusting the Ignite Cluster Size when you need to Scale
• How to try it out with Azure!
BY THE END – YOU WILL BECOME THE…
FIRST - APACHE IGNITE’S PLATFORM
What it isn’t:
APACHE IGNITE PLATFORM (VERSION 2.1+ DEPICTED)
KUBERNETES INTEGRATION WORKS WITH IGNITE VERSIONS 1.9 AND ABOVE
Apache Ignite Cluster
APIs to interact with your data
Deployment options
What it is:
APACHE IGNITE FEATURES
 Ignite Persistence
 ACID Compliance
 Complete SQL Support
 Key-Value Store/APIs
 Collocated Processing
 Scalability and Durability
SECOND – DEPLOYMENT WITH KUBERNETES (K8)
What it isn’t:
KUBERNETES IN A SIMPLE DEFINITION
 “Kubernetes intends to radically simplify the task of building, deploying and maintaining distributed systems.”
 Kubernetes: Up and Running: Dive into the Future of Infrastructure
 By: Kelsey Hightower
What it is:
K8 FEATURES
 Automatic Binpacking
 Horizontal Scaling
 Automated Rollouts and Rollbacks
 Storage Orchestration
 Self-Healing
 Service Discovery and Load Balancing
 Secret and Configuration Management
 Batch Execution
APACHE IGNITE + K8 CLUSTER ARCHITECTURE
https://kubernetes.io/docs/admin/high-availability/
BENEFITS OF K8 (VERSION 1.7)
 Cost Efficiency
 Use of containers by multiple developers ensures shared resourcing rather than redundancy
 High Availability and Performance
 Self-healing: deploying with K8 and setting rules for a set number of nodes ensures your cluster can always handle the transactions hitting it
 No more pager duty
 Get home to your pet chinchilla sooner
SETTING UP AN APACHE IGNITE CLUSTER
 Step 1:
 What is your use case?
 Seriously, what is your use case?
 I’m not kidding. Use case, then set up…
 Cart before horse please.
SETTING UP AN APACHE IGNITE CLUSTER
 1. Download Apache Ignite
 2. Make sure to add the ignite-kubernetes Maven dependency to your pom.xml (see the snippet below)
 3. Try using examples from our docs.
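A minimal sketch of the step 2 dependency for your pom.xml (the ${ignite.version} property is an assumption; match it to the Ignite version you downloaded):

    <dependency>
        <groupId>org.apache.ignite</groupId>
        <artifactId>ignite-kubernetes</artifactId>
        <version>${ignite.version}</version>
    </dependency>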
SETTING UP K8 ON A LOCAL MACHINE
 Install K8 where you intend to run it:
 AWS
 Google Cloud
 Dev Machine
 Set your $PATH w/K8
 Install Kubectl
 Follow K8 Docs
KUBERNETES DISCOVERY
 Multicast = ☹
 You could use the Static IP Finder & list Ignite IPs, but K8 will dynamically assign them
 You can use other cloud Ignite IP finders, but you need K8 to be running in that cloud env.
 What’s the point of the TcpDiscoveryKubernetesIpFinder?
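For orientation, a minimal sketch of the discovery section of an Ignite Spring config such as example-kube.xml (the file name follows the later slides; the serviceName property is optional and shown only to tie it to setServiceName):

    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <!-- Looks up pod IPs through the Kubernetes lookup service -->
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
                        <!-- Optional: defaults to 'ignite'; must match the K8 service name -->
                        <property name="serviceName" value="ignite"/>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>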
USING THE KUBERNETES IP FINDER AND THE KUBERNETES IGNITE
LOOKUP SERVICE
 Apps & Ignite nodes running outside of K8 will not be able to reach the cluster
 The K8 service should be deployed before the Ignite cluster boots (example manifest sketched below)
 The Ignite pods’ internal IPs will be maintained by the K8 service
 Service name must be equal to setServiceName(String)
 This will be `ignite` as a default
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/spi/discovery/tcp/ipfinder/kubernetes/TcpDiscoveryKubernetesIpFinder.html
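A minimal sketch of what ignite-service.yaml could look like (the port value and the app: ignite label are assumptions; the label just needs to match the one on your Ignite pods):

    apiVersion: v1
    kind: Service
    metadata:
      name: ignite          # must match setServiceName(String); 'ignite' is the default
    spec:
      clusterIP: None       # headless: used only to track Ignite pod endpoints
      ports:
        - port: 9042        # placeholder; discovery uses the TcpDiscoverySpi port, not this one
      selector:
        app: ignite         # assumed label carried by the Ignite pods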
USING DOCKER ON LOCAL MACHINE WITH MINIKUBE
SHARING THE IGNITE CLUSTER CONFIGURATION
SERVICE STARTUP & SHARING CONFIGS
 Minikube - Two basic commands
 minikube start
 minikube dashboard
 Kubectl
 kubectl create -f ~/<path-to-project>/<project-name>/config/ignite-service.yaml
NOW KUBERNETES IS RUNNING!
CONFIGURING YOUR IGNITE PODS
 2 Things Needed!
 1. Apache Ignite configuration file with the Kubernetes IP Finder
 2. YAML configurations for the Apache Ignite pods/nodes
 Steps:
 Create your ignite-service.yaml
 kubectl create -f ignite-service.yaml
 kubectl get svc ignite
SHARING IGNITE CLUSTER CONFIGS
 Confirm the ignite service was created
 Make a path to the persistent volume Docker will use to pass the Kubernetes config ‘example-kube.xml’
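A hedged sketch of a matching persistent volume (the name and size are assumptions; the /data/ignite path is the one adjusted for xhyve two slides later):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: ignite-volume
    spec:
      capacity:
        storage: 1Gi              # assumed size; the config file itself is tiny
      accessModes:
        - ReadWriteOnce
      hostPath:
        path: /data/ignite        # holds example-kube.xml; see the minikube/xhyve note below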
SUCCESS
Now, do this for the ignite-volume-claim.yaml
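And a hedged sketch of ignite-volume-claim.yaml to go with it (the requested size is an assumption and should not exceed the volume above):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: ignite-volume-claim
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi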
PERSISTENT VOLUME BOUND?
 Make sure your persistent volume is bound to the claim
 kubectl get pvc ignite-volume-claim
 kubectl get pv ignite-volume
Use the xhyve driver so you can access persistent volumes:
minikube start --vm-driver=xhyve
Also modify the directory path "/data/ignite" used in the examples to "/Users/<username>/data/ignite"
DEPLOYING YOUR IGNITE PODS
 Now it’s time to launch (a sketch of ignite-deployment.yaml follows this list)
 kubectl create -f ignite-deployment.yaml
 kubectl get pods
 Get the logs and examine them for each pod, e.g.:
 kubectl logs ignite-cluster-3454482164-d4m6g
 Scale out:
 kubectl scale --replicas=5 -f ignite-deployment.yaml
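Here is a hedged sketch of what ignite-deployment.yaml might contain. It is not the exact manifest from the Ignite docs: the apps/v1 API version, the labels, and the file:// CONFIG_URI value are assumptions, while apacheignite/ignite is the public Docker Hub image.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ignite-cluster
    spec:
      replicas: 2                          # start small; grow later with kubectl scale
      selector:
        matchLabels:
          app: ignite
      template:
        metadata:
          labels:
            app: ignite                    # must match the selector of the ignite lookup service
        spec:
          containers:
            - name: ignite-node
              image: apacheignite/ignite   # official Apache Ignite image
              env:
                - name: CONFIG_URI         # the image starts Ignite with this config
                  value: file:///data/ignite/example-kube.xml   # assumes a local file URI is accepted
              volumeMounts:
                - name: ignite-config
                  mountPath: /data/ignite
          volumes:
            - name: ignite-config
              persistentVolumeClaim:
                claimName: ignite-volume-claim

After kubectl create -f ignite-deployment.yaml, kubectl get pods should list one ignite-cluster-* pod per replica, and each pod’s log should show the node joining a topology of the expected size.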
MANY DEPLOYMENT OPTIONS
 On Premise
 Cloud
 Azure
 EC2
 Google Cloud
USING MINIKUBE FOR LOCAL DEV
 A good place to start for exploration
 When you want to get in the cloud - pick your poison
 Post on setup w/Azure by Ignite PMC Denis Magda:
 https://dzone.com/articles/deploying-apache-ignite-in-kubernetes-on-microsoft
 https://kubernetes.io/docs/setup/pick-right-solution/#turnkey-cloud-solutions
DARE TO TRY?
1. DEPLOY CLOUD ENVIRONMENT
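A hedged sketch of step 1 with the Azure CLI of that era (Azure Container Service, as in the linked DZone post); the resource group and cluster names are the ones reused on the next slide, and the location is an assumption:

    #create a resource group and an ACS Kubernetes cluster
    az group create --name myResourceGroup --location westus
    az acs create --orchestrator-type=kubernetes --resource-group myResourceGroup --name myK8sCluster --generate-ssh-keys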
2. CONNECT TO YOUR CLOUD ENVIRONMENT FROM YOUR LOCAL MACHINE, EX) AZURE
#connect to the cluster; make sure your SSH keys are set up
az acs kubernetes get-credentials --resource-group=myResourceGroup --name=myK8sCluster
#make sure you see the k8s-agents & master
kubectl get nodes
USING THE DASHBOARD
1. kubectl proxy
2. http://localhost:8001/ui
4. CREATE THE K8 LOOKUP SERVICE
#using the link above, next to the number 4, create the file, then initiate the service
kubectl create -f ignite-service.yaml
#you will see that the ignite service is under the services tab
5. DEPLOY YOUR APACHE IGNITE CLUSTER
#create the ignite-deployment.yaml file following instructions here
kubectl create -f ignite-deployment.yaml
#you will see that the ignite cluster is running in kubernetes
ADJUSTING THE IGNITE CLUSTER SIZE WHEN YOU NEED TO SCALE
•When you want to elastically scale out your cluster with K8:
• kubectl scale --replicas=<n> -f ignite-deployment.yaml
• Run:
• kubectl get pods
•Let’s say you want 5 nodes?
• kubectl scale --replicas=5 -f ~/kubernetes_dev/azure/ignite-deployment.yaml
•You will see your cluster scale out, in this case from 2 to 5 nodes!
OVERALL STEPS
Ignite Download:
https://ignite.apache.org/download.cgi
Run Kubernetes Locally:
https://kubernetes.io/docs/getting-started-guides/minikube/
Deploy Kubernetes & Ignite:
https://apacheignite.readme.io/docs/kubernetes-deployment
RESOURCES
 Denis Magda’s & Akmal Chaudhri’s blogs on deploying Ignite w/K8 and Azure or AWS:
 https://dzone.com/articles/deploying-apache-ignite-in-kubernetes-on-microsoft
 https://www.gridgain.com/resources/blog/kubernetes-and-apacher-ignitetm-deployment-aws
 Tutorials
 https://kubernetes.io/docs/tutorials/
 K8 Book by Kelsey Hightower:
 http://shop.oreilly.com/product/0636920043874.do
 Ignite Book by Shahim & others:
 https://leanpub.com/ignite
ANY QUESTIONS?
 Reach me on Twitter
@dtrapezoid
 Use #apacheignite
 Check out http://ignite.apache.org

Editor’s Notes

  1. Durable Memory: Ignite's durable memory component treats RAM not just as a caching layer but as a complete fully functional storage layer. This means that users can turn the persistence on and off as needed. If the persistence is off, then Ignite can act as a distributed in-memory database or in-memory data grid, depending on whether you prefer to use SQL or key-value APIs. If the persistence is turned on, then Ignite becomes a distributed, horizontally scalable database that guarantees full data consistency and is resilient to full cluster failures.

  Ignite Persistence: Ignite native persistence is a distributed, strongly consistent disk store that transparently integrates with Ignite's durable memory.

  ACID Compliance: Data stored in Ignite is ACID-compliant both in memory and on disk, making Ignite a strongly consistent system. Ignite transactions work across the network and can span multiple servers.

  Complete SQL Support: Ignite provides full support for SQL, DDL and DML, allowing users to interact with Ignite using pure SQL without writing any code. This means that users can create tables and indexes as well as insert, update, and query data using only SQL. Having such complete SQL support makes Ignite a one-of-a-kind distributed SQL database.

  Key-Value: The in-memory data grid component in Ignite is a fully transactional distributed key-value store that can scale horizontally across 100s of servers in the cluster. When persistence is enabled, Ignite can also store more data than fits in memory and survive full cluster restarts.

  Collocated Processing: Most traditional databases work in a client-server fashion, meaning that data must be brought to the client side for processing. This approach requires lots of data movement from servers to clients and generally does not scale. Ignite, on the other hand, allows for sending light-weight computations to the data, i.e. collocating computations with data. As a result, Ignite scales better and minimizes data movement.

  Scalability and Durability: Ignite is an elastic, horizontally scalable distributed system that supports adding and removing cluster nodes on demand. Ignite also allows for storing multiple copies of the data, making it resilient to partial cluster failures. If the persistence is enabled, then data stored in Ignite will also survive full cluster failures. Cluster restarts in Ignite can be very fast, as the data becomes operational instantaneously directly from disk. As a result, the data does not need to be preloaded in-memory to begin processing, and Ignite caches will lazily warm up, resuming the in-memory performance.
  3. The name Kubernetes originates from Greek, meaning helmsman or pilot, and is the root of governor and cybernetic. K8s is an abbreviation derived by replacing the 8 letters “ubernete” with “8”.
  4. Starts with the need for containerization: VMs are clunky, not very portable, and require the OS package manager. Containers are small, fast, portable, and use OS-level virtualization. Through Docker they have become popular since 2013; Linux containers have been used as a packaging mechanism for distributed applications in a microservices-style architecture and to run CI/CD. Docker is dev-centric, allowing for shared images including your code, binaries, configs, etc. Developers and operators generally love containers.

  Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery.

  Automatic binpacking: Automatically places containers based on their resource requirements and other constraints, while not sacrificing availability. Mix critical and best-effort workloads in order to drive up utilization and save even more resources.

  Horizontal scaling: Scale your application up and down with a simple command, with a UI, or automatically based on CPU usage. The Horizontal Pod Autoscaler sets the number of pods in a replication controller, deployment or replica set based on observed CPU utilization (or with custom metrics if desired).

  Automated rollouts and rollbacks: Kubernetes progressively rolls out changes to your application or its configuration, while monitoring application health to ensure it doesn't kill all your instances at the same time. If something goes wrong, Kubernetes will roll back the change for you. Take advantage of a growing ecosystem of deployment solutions.

  Storage orchestration: Automatically mount the storage system of your choice, whether from local storage, a public cloud provider such as GCP or AWS, or a network storage system such as NFS, iSCSI, Gluster, Ceph, Cinder, or Flocker.

  Self-healing: Restarts containers that fail, replaces and reschedules containers when nodes die, kills containers that don't respond to your user-defined health check, and doesn't advertise them to clients until they are ready to serve.

  Service discovery and load balancing: No need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives containers their own IP addresses and a single DNS name for a set of containers, and can load-balance across them.

  Secret and configuration management: Deploy and update secrets and application configuration without rebuilding your image and without exposing secrets in your stack configuration.

  Batch execution: In addition to services, Kubernetes can manage your batch and CI workloads, replacing containers that fail, if desired.
  6. The Kubernetes Master is a collection of three processes that run on a single node in your cluster, which is designated as the master node. Those processes are: kube-apiserver, kube-controller-manager and kube-scheduler. Each individual non-master node in your cluster runs two processes: kubelet, which communicates with the Kubernetes Master, and kube-proxy, a network proxy which reflects Kubernetes networking services on each node.
  7. The IP finder automatically looks up the IP addresses of all the alive Ignite pods by talking to a distinct Kubernetes service that keeps all the endpoints up to date.
  8. Discovery SPI implementation that uses TCP/IP for node discovery. Nodes are organized in a ring, so almost all network exchange (except a few cases) is done across it. The service will maintain a list of all endpoints (internal IP addresses) of all containerized Ignite pods running so far. The name of the service must be equal to setServiceName(String), which is `ignite` by default.

  public class TcpDiscoveryKubernetesIpFinder extends TcpDiscoveryIpFinderAdapter

  An IP finder for automatic lookup of Ignite nodes running in a Kubernetes environment. All Ignite nodes have to be deployed as Kubernetes pods in order to be discovered. An application that uses Ignite client nodes as a gateway to the cluster is required to be containerized as well. Applications and Ignite nodes running outside of Kubernetes will not be able to reach the containerized counterparts. The implementation is based on a distinct Kubernetes service that has to be created and should be deployed prior to Ignite nodes' startup. The service will maintain a list of all endpoints (internal IP addresses) of all containerized Ignite pods running so far. The name of the service must be equal to setServiceName(String), which is `ignite` by default. As for Ignite pods, it's recommended to label them in such a way that the service will use the label in its selector configuration, excluding endpoints of irrelevant Kubernetes pods running in parallel. The IP finder, in its turn, will call this service to retrieve Ignite pods' IP addresses. The port will be either the one that is set with TcpDiscoverySpi.setLocalPort(int) or TcpDiscoverySpi.DFLT_PORT. Make sure that all Ignite pods occupy a similar discovery port, otherwise they will not be able to discover each other using this IP finder.

  Optional configuration:
  The Kubernetes service name for IP addresses lookup (see setServiceName(String))
  The Kubernetes service namespace for IP addresses lookup (see setNamespace(String))
  The host name of the Kubernetes API server (see setMasterUrl(String))
  Path to the service token (see setAccountToken(String))

  Both registerAddresses(Collection) and unregisterAddresses(Collection) have no effect. Note, this IP finder is only workable when used in a Kubernetes environment. Choose another implementation of TcpDiscoveryIpFinder for local or home network tests.
  9. Kube-cuddle