Deploying MongoDB
sharded clusters easily
with Terraform and
Ansible
All Things Open, October 2021
Ivan Groenewold
Agenda
● Terraform 101
● Provisioning in GCP
● Ansible 101
● Deploying MongoDB
● Q&A
About me
● @igroenew
● Architect at Percona
● Based in Argentina
MongoDB sharding in a nutshell
Image © MongoDB Inc.
Target infrastructure
The plan
● Define the topology
● Provision the infrastructure using Terraform
○ instances, disks, network, buckets, etc.
● Install the software with Ansible
○ MongoDB, monitoring & backup solution
Terraform 101
● Infrastructure-as-Code
● Open Source
● Works with multiple resources and providers
● Declarative approach - state what you want
● Infrastructure converges to the desired state
Terraform syntax
● Based on the HashiCorp Configuration Language (HCL)
● Basic constructs:
○ Arguments
name = "my_instance"
○ Blocks
resource "google_compute_instance" "my_instance" {
  …
}
"resource" is the block type, followed by label 1 and label 2
the braces enclose the block body
Defining variables
variable "data_disk_type" {
  default = "pd-standard"
}

variable "my_instance_type" {
  default     = "e2-standard-2"
  description = "instance type"
}

variable "my_volume_size" {
  default     = "100"
  description = "storage size"
}

variable "centos_amis" {
  description = "CentOS image in each region"
  default = {
    northamerica-northeast1 = "centos-8-v20210316"
    northamerica-northeast2 = "centos-8-v20210316"
  }
}
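Defaults can be overridden without touching the .tf files, for example via a terraform.tfvars file; a minimal sketch (the values are illustrative):

```hcl
# terraform.tfvars - overrides the defaults declared above
data_disk_type   = "pd-ssd"
my_instance_type = "e2-standard-4"
my_volume_size   = "200"
```

The same overrides can also be passed on the command line, e.g. terraform apply -var="my_volume_size=200".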
Provisioning in GCP
provision a disk:

resource "google_compute_disk" "cfg_disk" {
  name = "mongo-cfg0-data"
  type = var.data_disk_type
  size = var.my_volume_size
  zone = var.my_zone
}

provision an instance:

resource "google_compute_instance" "cfg" {
  name         = "my_instance"
  machine_type = var.my_instance_type
  tags         = ["mongodb-cfg"]
  zone         = var.my_zone
  boot_disk {
    initialize_params {
      image = lookup(var.centos_amis, var.region)
    }
  }
  attached_disk {
    source = google_compute_disk.cfg_disk.name
  }
  network_interface {
    network    = google_compute_network.vpc-network.id
    subnetwork = google_compute_subnetwork.vpc-subnet.id
  }
}
Provisioning in GCP (2)
(same configuration as the previous slide, highlighting two details)
● nested blocks: boot_disk, initialize_params, attached_disk and network_interface are blocks nested inside the resource block
● call lookup function: lookup(var.centos_amis, var.region) selects the image matching the deployment region
Working with Terraform
● terraform init
○ Initialize the working directory
● terraform plan
○ print the action plan
● terraform apply
○ carry out the actions
● terraform destroy
○ remove all managed resources
Working with Terraform (2)
Working with Terraform (3)
● What is a MongoDB server?
○ Instance + Persistent disk (except mongos servers)
○ Firewall rules
○ Init scripts
■ mount the volumes, OS tweaks, etc
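On GCP the init scripts can be wired in through the instance's metadata_startup_script argument; a sketch, assuming an xfs data volume (the device path and mount point are illustrative):

```hcl
resource "google_compute_instance" "cfg" {
  # ... instance arguments as in the earlier example ...

  # runs on first boot: format and mount the data volume
  metadata_startup_script = <<-EOT
    #!/bin/bash
    mkfs.xfs /dev/sdb
    mkdir -p /var/lib/mongo
    echo "/dev/sdb /var/lib/mongo xfs defaults 0 0" >> /etc/fstab
    mount -a
  EOT
}
```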
Working with Terraform (4)
● Create .tf files for each component
■ mongos router
■ mongod shard
■ Config server
■ anything else?
● Use a separate variables file
Provisioning the infrastructure
● Servers
○ cfg-server.tf
○ shard-server.tf
○ mongos-server.tf
○ pmm-server.tf
● variables.tf
● network.tf
● backup.tf
Configuring the network
● Define the region
● Configure a VPC
● Define the subnets
Configuring the network (2)
data "google_compute_zones" "available" {
  status = "UP"
}

resource "google_compute_network" "vpc-network" {
  name                    = "my-vpc"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "vpc-subnet" {
  name          = "mongodb-subnet"
  ip_cidr_range = "10.1.0.0/16"
  region        = var.region
  network       = google_compute_network.vpc-network.id
}

query data source: the data block asks GCP for the zones that are UP
Creating the instances
resource "google_compute_instance" "server" {
  count = 6
  name  = "server-${count.index}"
  zone  = data.google_compute_zones.available.names[count.index % 3]
  ...
}
● Use count.index to spread the instances across AZs
Configuring network access
resource "google_compute_firewall" "mongodb-cfgsvr-firewall" {
  name        = "mongodb-cfgsvr-firewall"
  network     = google_compute_network.vpc-network.name
  direction   = "INGRESS"
  target_tags = ["mongodb-cfg"]
  allow {
    protocol = "tcp"
    ports    = ["22", "27019"]
  }
}
Preparing the backup infrastructure
● Create a Cloud Storage bucket
● Allow the instances to read/write from it
● Objects lifecycle policy
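The lifecycle policy can be declared on the bucket itself; a sketch assuming a 30-day retention window (the age value is illustrative):

```hcl
resource "google_storage_bucket" "mongo-backups" {
  name     = "mongo-backups"
  location = var.region

  # delete backup objects older than 30 days
  lifecycle_rule {
    condition {
      age = 30
    }
    action {
      type = "Delete"
    }
  }
}
```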
Preparing the backup infrastructure (2)
● Steps are cloud-specific
● For GCP we need:
○ Cloud Storage bucket
○ Service account
○ HMAC key-pair for the service account
○ Grant storage-admin role to the service account
Preparing the backup infrastructure (3)
resource "google_storage_bucket" "mongo-backups" {
  name                        = "mongo-backups"
  location                    = var.region
  force_destroy               = true
  uniform_bucket_level_access = true
}

resource "google_service_account" "mongo-backup-service-account" {
  account_id   = "mongo-backup-service-account"
  display_name = "Mongo Backup Service Account"
}

resource "google_storage_hmac_key" "mongo-backup-service-account" {
  service_account_email = google_service_account.mongo-backup-service-account.email
}

resource "google_storage_bucket_iam_binding" "binding" {
  bucket = google_storage_bucket.mongo-backups.name
  role   = "roles/storage.admin"
  members = [
    "serviceAccount:${google_service_account.mongo-backup-service-account.email}",
  ]
}
Monitoring
● PMM client
○ runs locally on each server
○ pushes metrics
● PMM server
○ Performance metrics history
○ Query analytics
○ Integrated alerting
○ Integrated backups (WIP)
https://pmmdemo.percona.com
What’s next?
● We have the servers
● We have the network configured
● We have the backup infrastructure
● We need to deploy the software
Ansible 101
● Automation engine
● SSH-based
● Open source
● Web interface: AWX project
Why Ansible?
● Easy to deploy
● No agent required
● No firewall rules required
● YAML syntax
● Secure
Installing Ansible
● Control machine
○ Can be your laptop
○ Acts as the Ansible “server”
○ Only needed when running Ansible code
● Managed nodes
Inventory
● Inventory options
○ Static
■ INI or YAML format
○ Dynamic
■ Scripts available for most cloud providers
■ Write your own plugin
● The default inventory is /etc/ansible/hosts
Inventory (2)
● Static inventory example
[webservers]
www.myhost.com
www.example.com
[databases]
db-[a:f].example.com
[atlanta]
dba.example.com http_port=80 maxRequestsPerChild=808
[atlanta:vars]
ntp_server=ntp.atlanta.example.com
Modules
● Ansible building blocks
● Should be idempotent
Examples:
$ ansible example -m ping
www.example.com | SUCCESS => {
"changed": false,
"ping": "pong"
}
$ ansible example -m service -a "name=httpd state=started"
Playbooks
● Orchestrate steps
● Composed of one or more plays
● Each play runs a number of tasks in order on a group of servers
○ e.g. call a module to do something
● YAML format
Playbooks (2)
● Inventory example:
[webservers]
web[01:10].example.com
[databases]
db[01:10].example.com
● Playbook example:

---
# play 1
- hosts: webservers
  tasks:
    # task 1
    - name: ensure apache is at the latest version
      yum:
        name: httpd
        state: latest
    # task 2
    - name: ensure apache is started
      service:
        name: httpd
        state: started
# play 2
- hosts: databases
  tasks:
    - name: ensure postgresql is at the latest version
      yum:
        name: postgresql
        state: latest
Playbooks (3)
Playbooks (4)
Play 1:

---
- hosts: webservers          # host group as per inventory
  tasks:
    - name: ensure apache is at the latest version
      yum:                   # module
        name: httpd
        state: latest
...
Playbooks (5)
● Run with ansible-playbook command
$ ansible-playbook my_pb.yml [--limit '*example.com']
Playbooks (6)
PLAY [all]
***************************************************************************
TASK [check if specified os user exists]
***************************************************************************
changed: [mysql1]
ok: [mysql2]
PLAY RECAP
***************************************************************************
mysql1 : ok=1 changed=1 unreachable=0 failed=0
mysql2 : ok=1 changed=0 unreachable=0 failed=0
Variables
● Simple variables
foo: bar
● List
datacenter:
- us-east
- us-west
● Dictionary
foo:
field1: one
field2: two
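In tasks, these variables are referenced with Jinja2 expressions; for example (the task below is illustrative, and assumes the dictionary definition of foo):

```yaml
- name: print some variable values
  debug:
    msg: "first DC is {{ datacenter[0] }}, foo.field1 is {{ foo.field1 }}"
```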
Automating MongoDB deployment
1. Create an Ansible inventory file
2. Edit the variables file
3. Run the ansible-playbook
Automating MongoDB deployment
1. Create an Ansible inventory file
2. Edit the variables file
3. Run the ansible-playbook
Inventory file for a sharded cluster
● One group per shard (“shardN”)
● A group for the config servers (“cfg”)
● A group for the routers (“mongos”)
[shard1]
host1.example.com mongodb_primary=True
host2.example.com
host3.example.com
[shard2]
host4.example.com mongodb_primary=True
host5.example.com
host6.example.com
[cfg]
host7.example.com mongodb_primary=True
host8.example.com
host9.example.com
[mongos]
host10.example.com
Inventory file for a sharded cluster (2)
Generating the inventory file with Terraform
● Use the local_file Terraform resource
● Use templates to dynamically create the groups
● How to generate an Ansible inventory from Terraform
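A minimal sketch of that approach with the local_file resource and the built-in templatefile function (the template path and resource names are illustrative):

```hcl
resource "local_file" "ansible_inventory" {
  filename = "inventory.ini"
  content = templatefile("templates/inventory.tpl", {
    cfg_hosts    = google_compute_instance.cfg[*].name
    mongos_hosts = google_compute_instance.mongos[*].name
  })
}
```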
Automating MongoDB deployment
1. Create an Ansible inventory file
2. Edit the variables file
3. Run the ansible-playbook
Variables
● Copy the MongoDB files / Install from repository
● Ports for mongod, mongos
● Define the paths for data, logs, etc.
● Authentication mechanism
● Encryption
● Backup
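A hedged sketch of such a variables file (the names mirror ones used later in this deck; the values are illustrative):

```yaml
mongo_port: 27017
shard_port: 27018
cfgserver_port: 27019
mongodb_datadir: /var/lib/mongo
mongodb_logdir: /var/log/mongodb
use_tls: false
keyFile_loc: /var/lib/mongo/rskeyfile
```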
Things we need done
● Install packages
● Create config files
● Start/stop processes
● Initialize replica sets
● Create users
● Configure backup job
● Add hosts to monitoring
● Add the shards to the cluster
Installing packages
packages:
  - percona-server-mongodb
  - percona-backup-mongodb
  - pmm2-client

- name: install rpm from repo
  package:
    name: "{{ item }}"             # dynamic variable
    state: present
  with_items: "{{ packages }}"     # loop
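with_items is the older loop syntax; since Ansible 2.5 the same task is usually written with the loop keyword:

```yaml
- name: install rpm from repo
  package:
    name: "{{ item }}"
    state: present
  loop: "{{ packages }}"
```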
Creating configuration files
● Generate files dynamically
● Include/exclude different sections
● Variables are not enough
● Solution: Ansible Templates
Ansible Templates
● Built-in module
● Create file with dynamic content
● Jinja2 engine
● Store them in /templates subdirectory
Creating templated config files
mongod.conf.j2 template:

...
security:
{% if use_tls %}
  clusterAuthMode: x509
{% else %}
  keyFile: {{ keyFile_loc }}
{% endif %}
...

Variables file:
use_tls: false
keyFile_loc: /var/lib/mongo/rskeyfile
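With those variable values the template renders the else branch, so the resulting section of mongod.conf would be:

```yaml
security:
  keyFile: /var/lib/mongo/rskeyfile
```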
Creating templated config files (2)
Task:
- name: copy mongod.conf
  become: yes
  template:
    src: templates/mongod.conf.j2
    dest: /etc/mongod.conf
    owner: root
    group: root
    mode: 0644
Starting/stopping processes
- name: start mongod on rs member
  become: yes
  service:
    name: mongod
    state: started
Initialize replica sets
● rs.initiate()
● Our inventory file:

[cfg]
host1.example.com mongodb_primary=True
host2.example.com
host3.example.com

[shard1]
host4.example.com mongodb_primary=True
host5.example.com
host6.example.com

[shard2]
host7.example.com mongodb_primary=True
host8.example.com
host9.example.com

[mongos]
host10.example.com

● The group_names array holds the groups a host belongs to
○ for "host1.example.com": group_names = [ "cfg" ]
Initialize replica sets (2)
init-rs.js.j2:

rs.initiate(
  {
    _id: "{{ group_names[0] }}",
    members: [
    {% for h in groups[ group_names[0] ] %}
      { _id: {{ loop.index0 }},
        host: "{{ h }}:{% if hostvars[inventory_hostname].group_names[0].startswith('shard') %}{{ shard_port }}{% else %}{{ cfgserver_port }}{% endif %}",
        priority: 1 }{% if not loop.last %},{% endif %}
    {% endfor %}
    ] });

group_names[0] — the first group a host appears in
groups[ group_names[0] ] — all hosts part of the first group
Initialize replica sets (3)
init-rs.js:

rs.initiate(
  {
    _id: "cfg",
    members: [
      { _id: 0, host: "host1.example.com:27019", priority: 1 },
      ...
    ] });
Initialize replica sets (4)
- name: render the template for the init command
  template:
    src: templates/init-rs.js.j2
    dest: /tmp/init-rs.js
    mode: 0644
  when: mongodb_primary is defined and mongodb_primary

- name: run the init command for the replica set
  shell: mongo --host localhost --port {{ mongo_port }} < /tmp/init-rs.js
  when: mongodb_primary is defined and mongodb_primary   # runs only once per replica set
Create users
createUser.js.j2:

db.getSiblingDB("admin").createUser({
  user: "{{ mongodb_pmm_user }}",
  pwd: "{{ mongodb_pmm_user_pwd }}",
  roles: [
    { role: "explainRole", db: "admin" },
    { role: "clusterMonitor", db: "admin" },
    { role: "read", db: "local" }
  ]
});
Create users (2)
- name: prepare the command to create pmm user
  template:
    src: templates/createUser.js.j2
    dest: /tmp/createUser.js
    mode: 0644
  when: mongodb_primary is defined and mongodb_primary

- name: run the command to create the user
  shell: mongo admin -u {{ root_user }} -p{{ mongo_root_password }} --port {{ mongo_port }} < /tmp/createUser.js
  when: mongodb_primary is defined and mongodb_primary
Configure backup
- name: set up backup cron job
  cron:
    name: pbm backup
    minute: 3
    hour: 0
    user: pbm
    job: /usr/bin/pbm backup --mongodb-uri "mongodb://{{ pbmuser }}:{{ pbmpwd }}@{{ ansible_fqdn }}:{{ mongo_port }}"
    cron_file: pbm_daily_backup
Configure monitoring
- name: point pmm-client to the PMM server
  become: true
  shell: pmm-admin config --server-url=https://{{ pmm_server_user }}:{{ pmm_server_pwd }}@{{ pmm_server }}:443 --server-insecure-tls --force

- name: add mongodb metrics exporter
  become: true
  shell: pmm-admin add mongodb --username={{ mongodb_pmm_user }} --password={{ mongodb_pmm_user_pwd }} --host={{ ansible_fqdn }} --port={{ cfg_server_port if ('cfg' in group_names) else shard_port }}
Add the shards
- name: add the shards
  hosts: shard*
  tasks:
    - name: add the shards to the cluster
      shell: mongo admin -uroot -p{{ mongo_root_password }} --port {{ mongos_port }} --eval "sh.addShard('{{ group_names[0] }}/{{ ansible_fqdn }}:{{ shard_port }}')"
      delegate_to: "{{ groups.mongos | first }}"
      when: mongodb_primary is defined and mongodb_primary
Automating MongoDB deployment
1. Create an Ansible inventory file
2. Edit the variables file
3. Run the ansible-playbook
ansible-playbook main.yml -i inventory.ini --ask-become-pass
Putting it all together
● Define the topology
● Create the infrastructure using Terraform
● Generate the inventory file for Ansible
● Install the software with Ansible
Putting it all together (2)
● Define the variables
○ variables.tf
○ Ansible vars file
● Run terraform apply
● Run ansible-playbook
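End to end, the whole deployment boils down to a short command sequence (file names as used throughout this deck):

```shell
terraform init     # initialize the working directory
terraform apply    # provision instances, network, and backup bucket
ansible-playbook main.yml -i inventory.ini --ask-become-pass
```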
Benefits
● Define a process
● Save time
● Reuse code
● Streamline deployments
● Ensure resources are monitored (and backed up)
Q&A
Thank you for attending!
https://www.percona.com/blog/author/ivan-groenewold/
 
MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024
 
Elevate Developer Efficiency & build GenAI Application with Amazon Q​
Elevate Developer Efficiency & build GenAI Application with Amazon Q​Elevate Developer Efficiency & build GenAI Application with Amazon Q​
Elevate Developer Efficiency & build GenAI Application with Amazon Q​
 

Deploying MongoDB sharded clusters easily with Terraform and Ansible

  • 1. Deploying MongoDB sharded clusters easily with Terraform and Ansible All Things Open, October 2021 Ivan Groenewold
  • 2. Agenda ● Terraform 101 ● Provisioning in GCP ● Ansible 101 ● Deploying MongoDB ● Q&A
  • 3. About me ● @igroenew ● Architect at Percona ● Based in Argentina
  • 4. MongoDB sharding in a nutshell Image © MongoDB Inc.
  • 6. The plan ● Define the topology ● Provision the infrastructure using Terraform ○ instances, disks, network, buckets, etc. ● Install the software with Ansible ○ MongoDB, monitoring & backup solution
  • 7. Terraform 101 ● Infrastructure-as-Code ● Open Source ● Works with multiple resources and providers ● Declarative approach - state what you want ● Infrastructure converges to the desired state
  • 8. Terraform syntax ● Based on HashiCorp Configuration Language (HCL) ● Basic constructs: ○ Arguments name = "my_instance" ○ Blocks resource "google_compute_instance" "my_instance" { … } type label 1 label 2 body
  • 9. Defining variables variable "data_disk_type" { default = "pd-standard" } variable "my_instance_type" { default = "e2-standard-2" description = "instance type" } variable "my_volume_size" { default = "100" description = "storage size" } variable "centos_amis" { description = "CentOS AMIs on each region" default = { northamerica-northeast1 = "centos-8-v20210316" northamerica-northeast2 = "centos-8-v20210316" } }
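Defaults like these can be overridden per environment without editing the code, for example via a `terraform.tfvars` file. A minimal sketch (the values are illustrative; the variable names match the slide above):

```hcl
# terraform.tfvars -- values here override the defaults declared in variables.tf
data_disk_type   = "pd-ssd"
my_instance_type = "e2-standard-4"
my_volume_size   = "200"
```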
  • 10. Provisioning in GCP resource "google_compute_disk" "cfg_disk" { name = "mongo-cfg0-data" type = var.data_disk_type size = var.my_volume_size zone = var.my_zone } resource "google_compute_instance" "cfg" { name = "my_instance" machine_type = var.my_instance_type tags = ["mongodb-cfg"] zone = var.my_zone boot_disk { initialize_params { image = lookup(var.centos_amis, var.region) } } attached_disk { source = google_compute_disk.cfg_disk.name } network_interface { network = google_compute_network.vpc-network.id subnetwork = google_compute_subnetwork.vpc-subnet.id } provision a disk provision an instance
  • 11. Provisioning in GCP (2) resource "google_compute_disk" "cfg_disk" { name = "mongo-cfg0-data" type = var.data_disk_type size = var.my_volume_size zone = var.my_zone } resource "google_compute_instance" "cfg" { name = "my_instance" machine_type = var.my_instance_type tags = ["mongodb-cfg"] zone = var.my_zone boot_disk { initialize_params { image = lookup(var.centos_amis, var.region) } } attached_disk { source = google_compute_disk.cfg_disk.name } network_interface { network = google_compute_network.vpc-network.id subnetwork = google_compute_subnetwork.vpc-subnet.id } nested blocks call lookup function
  • 12. Working with Terraform ● terraform init ○ Initialize the working directory ● terraform plan ○ print the action plan ● terraform apply ○ carry out the actions ● terraform destroy ○ remove all managed resources
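Before `terraform init` can download the GCP plugin, the working directory needs a provider configuration, which the deck does not show. A minimal sketch (the project ID is a placeholder):

```hcl
terraform {
  required_providers {
    google = {
      source = "hashicorp/google"
    }
  }
}

provider "google" {
  project = "my-gcp-project" # placeholder project ID
  region  = var.region
}
```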
  • 14. Working with Terraform (3) ● What is a MongoDB server? ○ Instance + Persistent disk (except mongos servers) ○ Firewall rules ○ Init scripts ■ mount the volumes, OS tweaks, etc
  • 15. Working with Terraform (4) ● Create .tf files for each component ■ mongos router ■ mongod shard ■ Config server ■ anything else? ● Use a separate variables file
  • 16. Provisioning the infrastructure ● Servers ○ cfg-server.tf ○ shard-server.tf ○ mongos-server.tf ○ pmm-server.tf ● variables.tf ● network.tf ● backup.tf
  • 17. Configuring the network ● Define the region ● Configure a VPC ● Define the subnets
  • 18. Configuring the network (2) data "google_compute_zones" "available" { status = "UP" } resource "google_compute_network" "vpc-network" { name = "my-vpc" auto_create_subnetworks = false } resource "google_compute_subnetwork" "vpc-subnet" { name = "mongodb-subnet" ip_cidr_range = "10.1.0.0/16" region = var.region network = google_compute_network.vpc-network.id } query data source
  • 19. Creating the instances resource "google_compute_instance" "server" { count = 6 name = "server-${count.index}" zone = data.google_compute_zones.available.names[count.index % 3] ● Use count.index to spread the instances across AZs
  • 20. Configuring network access resource "google_compute_firewall" "mongodb-cfgsvr-firewall" { name = "mongodb-cfgsvr-firewall" network = google_compute_network.vpc-network.name direction = "INGRESS" target_tags = ["mongodb-cfg"] allow { protocol = "tcp" ports = ["22", "27019"] } }
  • 21. Preparing the backup infrastructure ● Create a Cloud Storage bucket ● Allow the instances to read/write from it ● Objects lifecycle policy
  • 22. Preparing the backup infrastructure (2) ● Steps are cloud-specific ● For GCP we need: ○ Cloud Storage bucket ○ Service account ○ HMAC key-pair for the service account ○ Grant storage-admin role to the service account
  • 23. Preparing the backup infrastructure (3) resource "google_storage_bucket" "mongo-backups" { name = "mongo-backups" location = var.region force_destroy = true uniform_bucket_level_access = true } resource "google_service_account" "mongo-backup-service-account" { account_id = "mongo-backup-service-account" display_name = "Mongo Backup Service Account" } resource "google_storage_hmac_key" "mongo-backup-service-account" { service_account_email = google_service_account.mongo-backup-service-account.email } resource "google_storage_bucket_iam_binding" "binding" { bucket = google_storage_bucket.mongo-backups.name role = "roles/storage.admin" members = [ "serviceAccount:${google_service_account.mongo-backup-service-account.email}", ] }
  • 24. ● PMM client ○ run locally on each server ○ pushes metrics ● PMM server ○ Performance metrics history ○ Query analytics ○ Integrated alerting ○ Integrated backups (WIP) https://pmmdemo.percona.com Monitoring
  • 25. What’s next? ● We have the servers ● We have the network configured ● We have the backup infrastructure ● We need to deploy the software
  • 26. Ansible 101 ● Automation engine ● SSH-based ● Open source ● Web interface: AWX project
  • 27. Why Ansible? ● Easy to deploy ● No agent required ● No firewall rules required ● YAML syntax ● Secure
  • 28. Installing Ansible ● Control machine ○ Can be your laptop ○ Acts as the Ansible “server” ○ Only needed when running Ansible code ● Managed nodes
  • 29. Inventory ● Inventory options ○ Static ■ ini or YML format ○ Dynamic ■ Scripts available for most cloud providers ■ Write your own plugin ● The default inventory is /etc/ansible/hosts
  • 30. Inventory (2) ● Static inventory example [webservers] www.myhost.com www.example.com [databases] db-[a:f].example.com [atlanta] dba.example.com http_port=80 maxRequestsPerChild=808 [atlanta:vars] ntp_server=ntp.atlanta.example.com
  • 31. Modules ● Ansible building blocks ● Should be idempotent Examples: $ ansible example -m ping www.example.com | SUCCESS => { "changed": false, "ping": "pong" } $ ansible example -m service -a "name=httpd state=started"
  • 32. Playbooks ● Orchestrate steps ● Composed of one or more plays ● Each play runs a number of tasks in order on a group of servers ○ e.g. call a module to do something ● YML format
  • 33. Playbooks (2) ● Inventory example: [webservers] web[01:10].example.com [databases] db[01:10].example.com
  • 34. --- - hosts: webservers tasks: - name: ensure apache is at the latest version yum: name: httpd state: latest - name: ensure apache is started service: name: httpd state: started - hosts: databases tasks: - name: ensure postgresql is at the latest version yum: name: postgresql state: latest play 1 play 2 task 1 task 2 ● Playbook example: Playbooks (3)
  • 35. Playbooks (4) Play 1: --- - hosts: webservers tasks: - name: ensure apache is at the latest version yum: name: httpd state: latest ... module host groups as per inventory
  • 36. Playbooks (5) ● Run with the ansible-playbook command $ ansible-playbook my_pb.yml [--limit '*.example.com']
  • 37. Playbooks (6) PLAY [all] *************************************************************************** TASK [check if specified os user exists] *************************************************************************** changed: [mysql1] ok: [mysql2] PLAY RECAP *************************************************************************** mysql1 : ok=1 changed=1 unreachable=0 failed=0 mysql2 : ok=1 changed=0 unreachable=0 failed=0
  • 38. Variables ● Simple variables foo: bar ● List datacenter: - us-east - us-west ● Dictionary foo: field1: one field2: two
  • 39. Automating MongoDB deployment 1. Create an Ansible inventory file 2. Edit the variables file 3. Run the ansible-playbook
  • 40. Automating MongoDB deployment 1. Create an Ansible inventory file 2. Edit the variables file 3. Run the ansible-playbook
  • 41. Inventory file for a sharded cluster ● One group per shard (“shardN”) ● A group for the config servers (“cfg”) ● A group for the routers (“mongos”)
  • 42. [shard1] host1.example.com mongodb_primary=True host2.example.com host3.example.com [shard2] host4.example.com mongodb_primary=True host5.example.com host6.example.com [cfg] host7.example.com mongodb_primary=True host8.example.com host9.example.com [mongos] host10.example.com Inventory file for a sharded cluster (2)
  • 43. Generating the inventory file with Terraform ● Use the local_file Terraform resource ● Use templates to dynamically create the groups ● How to generate an Ansible inventory from Terraform
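One possible shape for that `local_file` resource, rendering the inventory through `templatefile` (the template path and variable names here are illustrative, not taken from the slides):

```hcl
# Render an Ansible inventory from the provisioned instance names.
resource "local_file" "ansible_inventory" {
  filename = "${path.module}/inventory.ini"
  content = templatefile("${path.module}/templates/inventory.tftpl", {
    cfg_hosts    = google_compute_instance.cfg[*].name
    mongos_hosts = google_compute_instance.mongos[*].name
  })
}
```

The template then loops over each host list to emit the `[cfg]`, `[shardN]` and `[mongos]` groups shown on the previous slide.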
  • 44. Automating MongoDB deployment 1. Create an Ansible inventory file 2. Edit the variables file 3. Run the ansible-playbook
  • 45. Variables ● Copy the MongoDB files / Install from repository ● Ports for mongod, mongos ● Define the paths for data, logs, etc. ● Authentication mechanism ● Encryption ● Backup
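Collected in an Ansible vars file, those settings could look roughly like this (a sketch with illustrative values; `shard_port`, `cfgserver_port`, `use_tls` and `keyFile_loc` are the names used in later slides):

```yaml
# group_vars/all.yml -- illustrative values
shard_port: 27018
cfgserver_port: 27019
mongos_port: 27017
mongodb_datadir: /var/lib/mongo
mongodb_logdir: /var/log/mongodb
use_tls: false
keyFile_loc: /var/lib/mongo/rskeyfile
backup_bucket: mongo-backups
```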
  • 46. Things we need done ● Install packages ● Create config files ● Start/stop processes ● Initialize replica sets ● Create users ● Configure backup job ● Add hosts to monitoring ● Add the shards to the cluster
  • 47. Installing packages packages: - percona-server-mongodb - percona-backup-mongodb - pmm2-client - name: install rpm from repo package: name: "{{ item }}" state: present with_items: "{{ packages }}" dynamic variable loop
  • 48. Creating configuration files ● Generate files dynamically ● Include/exclude different sections ● Variables are not enough ● Solution: Ansible Templates
  • 49. Ansible Templates ● Built-in module ● Create file with dynamic content ● Jinja2 engine ● Store them in /templates subdirectory
  • 50. Creating templated config files mongod.conf.j2 template: ... security: {% if use_tls %} clusterAuthMode: x509 {% else %} keyFile: {{ keyFile_loc }} {% endif %} ... Variables file: use_tls: false keyFile_loc: /var/lib/mongo/rskeyfile
  • 51. Creating templated config files (2) Task: - name: copy mongod.conf become: yes template: src: templates/mongod.conf.j2 dest: /etc/mongod.conf owner: root group: root mode: 0644
  • 52. Starting/stopping processes - name: start mongod on rs member become: yes service: name: mongod state: started
  • 53. [cfg] host1.example.com mongodb_primary=True host2.example.com host3.example.com [shard1] host4.example.com mongodb_primary=True host5.example.com host6.example.com [shard2] host7.example.com mongodb_primary=True host8.example.com host9.example.com [mongos] host10.example.com Initialize replica sets ● rs.initiate() Our inventory file: group_names array for "host1.example.com": group_names = [ cfg ]
  • 54. Initialize replica sets (2) init-rs.js.j2: rs.initiate( { _id: "{{ group_names[0] }}", members: [ {% for h in groups[ group_names[0] ] %} { _id : {{ loop.index }}, host : "{{ h }}: {% if hostvars[inventory_hostname].group_names[0].startswith('shard') %} {{ shard_port }} {% else %} {{ cfgserver_port }} {% endif %}", priority: 1 } {% if not loop.last %} ,{% endif %} {% endfor %} ] }); the first group a host appears in all hosts part of the first group
  • 55. Initialize replica sets (3) init-rs.js: rs.initiate( { _id: "cfg", members: [ { _id : 0 , host : "host1.example.com:27018", priority: 1 } , ... ] });
  • 56. Initialize replica sets (4) - name: render the template for the init command template: src: templates/init-rs.js.j2 dest: /tmp/init-rs.js mode: 0644 when: mongodb_primary is defined and mongodb_primary - name: run the init command for the replica set shell: mongo --host localhost --port {{ mongo_port }} < /tmp/init-rs.js when: mongodb_primary is defined and mongodb_primary runs only once per replica-set
  • 57. Create users createUser.js.j2: db.getSiblingDB("admin").createUser({ user: "{{ mongodb_pmm_user }}", pwd: "{{ mongodb_pmm_user_pwd }}", roles: [ { role: "explainRole", db: "admin" }, { role: "clusterMonitor", db: "admin" }, { role: "read", db: "local" } ] });
  • 58. Create users (2) - name: prepare the command to create pmm user template: src: templates/createUser.js.j2 dest: /tmp/createUser.js mode: 0644 when: mongodb_primary is defined and mongodb_primary - name: run the command to create the user shell: mongo admin -u {{ root_user }} -p{{ mongo_root_password }} --port {{ mongo_port }} < /tmp/createUser.js when: mongodb_primary is defined and mongodb_primary
  • 59. Configure backup - name: set up backup cron job cron: name: pbm backup minute: 3 hour: 0 user: pbm job: /usr/bin/pbm backup --mongodb-uri "mongodb://{{ pbmuser }}:{{ pbmpwd }}@ {{ ansible_fqdn }}:{{ mongo_port }}" cron_file: pbm_daily_backup
  • 60. Configure monitoring - name: point pmm-client to the PMM server become: true shell: pmm-admin config --server-url=https://{{ pmm_server_user }}: {{ pmm_server_pwd }}@{{ pmm_server }}:443 --server-insecure-tls --force - name: add mongodb metrics exporter become: true shell: pmm-admin add mongodb --username={{ mongodb_pmm_user }} --password={{ mongodb_pmm_user_pwd }} --host={{ ansible_fqdn }} --port={{ cfg_server_port if ('cfg' in group_names) else shard_port }}
  • 61. Add the shards - name: add the shards hosts: shard* tasks: - name: add the shards to the cluster shell: mongo admin -uroot -p{{ mongo_root_password }} --port {{ mongos_port }} --eval "sh.addShard('{{ group_names[0] }}/{{ ansible_fqdn }}:{{ shard_port }}')" delegate_to: "{{ groups.mongos | first }}" when: mongodb_primary is defined and mongodb_primary
  • 62. Automating MongoDB deployment 1. Create an Ansible inventory file 2. Edit the variables file 3. Run the ansible-playbook ansible-playbook main.yml -i inventory.ini --ask-become-pass
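The `main.yml` referenced here is not shown in the deck; a plausible top-level structure, reusing the host groups from the inventory (the task file names are hypothetical), might be:

```yaml
# main.yml -- hypothetical top-level playbook structure
- hosts: all
  become: yes
  tasks:
    - import_tasks: tasks/install_packages.yml
    - import_tasks: tasks/configure_mongod.yml

- hosts: cfg:shard*
  become: yes
  tasks:
    - import_tasks: tasks/init_replica_sets.yml

- hosts: mongos
  become: yes
  tasks:
    - import_tasks: tasks/add_shards.yml
```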
  • 63. Putting it all together ● Define the topology ● Create the infrastructure using Terraform ● Generate the inventory file for Ansible ● Install the software with Ansible
  • 64. Putting it all together (2) ● Define the variables ○ variables.tf ○ Ansible vars file ● Run terraform apply ● Run ansible-playbook
  • 65. Benefits ● Define a process ● Save time ● Reuse code ● Streamline deployments ● Ensure resources are monitored (and backed up)
  • 66. Q&A Thank you for attending! https://www.percona.com/blog/author/ivan-groenewold/