1. Tungsten University:
Configure & provision
Tungsten clusters
Jeff Mace, Neil Armitage
©Continuent 2013
2. About Continuent
• The leading provider of clustering and
replication for open source DBMS
• Tungsten Clustering - Commercial-grade HA,
performance scaling and data management
for MySQL
• Tungsten Replication - Flexible, high-
performance replication
4. Application Application
MySQL Client API MySQL Client API
Replicator Replicator Replicator
Slave Master Slave
Existing MySQL Replication
5. Application Application
MySQL Client API MySQL Client API
Replicator Replicator Replicator
Slave Master Slave
Step 1: Replace MySQL Replication
6. Application Application
MySQL Client API MySQL Client API
Manager Manager Manager
Replicator Replicator Replicator
Slave Master Slave
Step 2: Add Manager Process
7. Application Application
Tungsten Connector Tungsten Connector
Manager Manager Manager
Replicator Replicator Replicator
Slave Master Slave
Step 3: Add Intelligent Connector
8. Application Application
Tungsten Connector Tungsten Connector
Monitoring and control
Monitoring and control
Manager Manager Manager
Replicator Replicator Replicator
Slave Master Slave
Data Service
Step 4: Connector Connectivity and Communication
10. NYC London
Manual Failover
11. Evaluating Tungsten
• Configuring servers
• Viewing cluster status
• Connectivity
• Testing cluster operations
• Next steps
• Questions
13. Create a Security Group
• Create a security group for all Tungsten
servers
• It should be created in the region you will
use for your servers (security groups are
region-scoped, not AZ-scoped)
All TCP traffic within the security group
All UDP traffic within the security group
All ICMP traffic within the security group
TCP port 22, 9999 and 13306 from '0.0.0.0/0'
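The rules above can also be created from the command line; a sketch using the AWS CLI, where the group name tungsten-sg is an assumption (any name works) and the same result is achievable in the console:

```shell
# Group name "tungsten-sg" is an illustrative assumption
aws ec2 create-security-group --group-name tungsten-sg \
    --description "Tungsten cluster servers"
# Allow all TCP, UDP, and ICMP traffic within the group itself
aws ec2 authorize-security-group-ingress --group-name tungsten-sg \
    --protocol tcp --port 0-65535 --source-group tungsten-sg
aws ec2 authorize-security-group-ingress --group-name tungsten-sg \
    --protocol udp --port 0-65535 --source-group tungsten-sg
aws ec2 authorize-security-group-ingress --group-name tungsten-sg \
    --protocol icmp --port -1 --source-group tungsten-sg
# Open SSH, connector, and MySQL ports from anywhere
for p in 22 9999 13306; do
  aws ec2 authorize-security-group-ingress --group-name tungsten-sg \
      --protocol tcp --port "$p" --cidr 0.0.0.0/0
done
```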
14. Launch Servers
• Create 5 EC2 servers in a single AZ
• They should be m1.large or greater
• Use the Amazon Linux AMI
• Set the root volume to be large enough for
your test data set
15. Logging Into the Servers
# SSH access must be done to the ec2-user account
$> ssh ec2-user@ec2-184-72-189-135.compute-1.amazonaws.com
# Then use sudo to gain access to the root user
$ ip-184-72-189-135> sudo su -
16. Set Server Hostnames
# We will use the following hostnames
# db1.nyc.tu
# db2.nyc.tu
# db3.nyc.tu
# db1.london.tu
# db2.london.tu
$ ip-184-72-189-135> hostname db1.nyc.tu
$ ip-184-72-189-135> sed -i "/^HOSTNAME=/c HOSTNAME=`hostname`" /etc/sysconfig/network
# You must logout completely and back in
# for the change to take effect
17. Modify /etc/hosts
• Add entries to /etc/hosts on each server
• Use the private IP address for each host
$ db1> /sbin/ifconfig eth0 | grep "inet addr"
inet addr:10.112.24.214  Bcast:10.112.25.255  Mask:255.255.254.0
$ db1> echo "10.112.24.214 db1.nyc.tu
10.10.219.125 db2.nyc.tu
10.242.134.18 db3.nyc.tu
10.10.102.83 db1.london.tu
10.112.74.196 db2.london.tu" >> /etc/hosts
18. Install Software Packages
$ db1> yum -y install mysql-server which curl bc rsync wget java-1.6.0-openjdk ruby
$ db1> rpm -i http://www.percona.com/redir/downloads/XtraBackup/LATEST/RPM/rhel6/x86_64/percona-xtrabackup-2.0.4-484.rhel6.x86_64.rpm
19. Create MySQL Users
$ db1> mysql -e "grant all on *.* to 'tungsten'@'%' identified by 'secret' with grant option"
$ db1> mysql -e "grant all on *.* to 'app'@'%' identified by 'secret'"
$ db1> mysql -e "revoke super on *.* from 'app'@'%'"
$ db1> mysql -e "delete from mysql.user where user=''"
$ db1> mysql -e "flush privileges"
20. Rinse & Repeat
• The requirements must be completed on each
server
• Use a different server-id
• Use the same SSH id_rsa and id_rsa.pub
• Full details at
https://docs.continuent.com/wiki/display/TEDOC/System+Requirements
• Sample install scripts will be posted on
docs.continuent.com
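The unique server-id requirement can be scripted; a small sketch (hostnames from the deck, the id-by-position numbering scheme is an assumption) that writes a my.cnf fragment per host:

```shell
# Hostnames as used in the deck; deriving the id from list position is
# an illustrative assumption -- any unique value per server works.
hosts="db1.nyc.tu db2.nyc.tu db3.nyc.tu db1.london.tu db2.london.tu"
id=0
for h in $hosts; do
  id=$((id + 1))
  # Fragment to merge into /etc/my.cnf on the matching server
  printf '[mysqld]\nserver-id = %d\n' "$id" > "/tmp/server-id-$h.cnf"
done
cat "/tmp/server-id-db2.nyc.tu.cnf"   # prints server-id = 2
```

Each fragment would then be copied into /etc/my.cnf on its server before restarting mysqld.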
21. Installation
• Installation completed from a staging
directory using tungsten-cookbook or tpm
• tungsten-cookbook runs tpm plus some
additional tests
• Staging configuration stored in
$CONTINUENT_PROFILES
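A hedged sketch of what a staging-directory install looks like with tpm; the dataservice options mirror the tpm configure example later in the deck, while the user and password options are assumptions to verify against the Tungsten documentation:

```shell
# Run from the unpacked staging directory on db1
$ db1> cd /opt/continuent/software/tungsten-enterprise-1.5.3-89
$ db1> ./tools/tpm configure usa \
    --user=tungsten \
    --replication-user=tungsten --replication-password=secret \
    --application-user=app --application-password=secret \
    --dataservice-hosts=db1.nyc.tu,db2.nyc.tu,db3.nyc.tu \
    --dataservice-connectors=db1.nyc.tu,db2.nyc.tu,db3.nyc.tu
$ db1> ./tools/tpm install usa
```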
22. Installation
• Installation performs several steps:
• Copy software to each server
• Validate configuration
• Write configuration files
• Start services
23. Viewing Cluster Status
$ db1> trepctl status
$ db1> cctrl -multi
Tungsten Enterprise 1.5.3 build 59
connect to 'usa@db1.nyc.tu'
usa: session established
[LOGICAL] / > ls
[LOGICAL] / > use usa
[LOGICAL] /usa > ls
[LOGICAL] /usa > use europe
[LOGICAL] /europe > ls
[LOGICAL] /europe > use world
[LOGICAL] /world > ls
24. Connectivity
$ db1> mysql -h`hostname` -P9999 -uapp -p
mysql> select @@hostname;
[LOGICAL] /usa > switch
mysql> select @@hostname;
mysql> begin;
mysql> select @@hostname;
[LOGICAL] /usa > switch
mysql> select @@hostname;
25. Read/Write Splitting
$ db1> vi $CONTINUENT_ROOT/tungsten/tungsten-connector/conf/user.map
mysql> select @@hostname;
mysql> select @@hostname for update;
mysql> begin; select @@hostname; rollback;
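A minimal user.map sketch; the three-field layout (user, password, default data service) and the @direct directive are taken from the Tungsten Connector documentation of this era, so verify both against your release before editing:

```
# <user> <password> <default data service>
app secret usa
# Route this user's reads directly to slaves, enabling read/write
# splitting at the connector (directive name per Tungsten docs;
# verify for your version)
@direct app
```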
27. Switching the Master Server
$ db1> cctrl -multi
[LOGICAL] /> use usa
[LOGICAL] /usa> switch
[LOGICAL] /usa> ls
[LOGICAL] /usa> switch to db1.nyc.tu
[LOGICAL] /usa> ls
[LOGICAL] /usa> use world
[LOGICAL] /world> ls
[LOGICAL] /world> switch to europe
[LOGICAL] /world> ls
[LOGICAL] /world> use europe
[LOGICAL] /europe> ls
28. Automatic Failover
$ db1> cctrl -multi
[LOGICAL] /> use europe
[LOGICAL] /europe> ls
mysql> select @@hostname for update;
$ db1> ssh db1.london.tu sudo /sbin/service mysqld stop
[LOGICAL] /europe> ls
mysql> select @@hostname for update;
$ db1> ssh db1.london.tu sudo /sbin/service mysqld start
[LOGICAL] /europe> datasource db1.london.tu recover
[LOGICAL] /europe> ls
[LOGICAL] /europe> switch to db1.london.tu
[LOGICAL] /europe> ls
29. Site Failover
$ db1> ssh db1.london.tu /opt/continuent/tungsten/cluster-home/bin/stopall
$ db1> ssh db2.london.tu /opt/continuent/tungsten/cluster-home/bin/stopall
$ db1> ssh db1.london.tu sudo /sbin/service mysqld stop
$ db1> ssh db2.london.tu sudo /sbin/service mysqld stop
$ db1> cctrl -multi
[LOGICAL] /> use world
[LOGICAL] /world> ls
[LOGICAL] /world> datasource europe fail
[LOGICAL] /world> failover
[LOGICAL] /world> ls
30. Site Recovery
$ db1> ssh db1.london.tu /opt/continuent/tungsten/cluster-home/bin/startall
$ db1> ssh db2.london.tu /opt/continuent/tungsten/cluster-home/bin/startall
$ db1> ssh db1.london.tu sudo /sbin/service mysqld start
$ db1> ssh db2.london.tu sudo /sbin/service mysqld start
$ db1> cctrl -multi
[LOGICAL] /> use europe
[LOGICAL] /europe> ls
[LOGICAL] /europe> use world
[LOGICAL] /world> recover using db1.london.tu
[LOGICAL] /world> ls
32. Upgrade the Slaves
[LOGICAL] /usa> switch to db1.nyc.tu
[LOGICAL] /usa> datasource db2.nyc.tu backup
[LOGICAL] /usa> datasource db2.nyc.tu shun
$ db1> mysql -hdb2.nyc.tu -P13306 -utungsten -p
mysql> # Apply backwards compatible changes to db2.nyc.tu
# If unsuccessful
[LOGICAL] /usa> datasource db2.nyc.tu restore
# Restart the process
# If successful
[LOGICAL] /usa> datasource db2.nyc.tu welcome
# Repeat the above steps for all slaves
33. Upgrade the Master
[LOGICAL] /usa> switch to db2.nyc.tu
[LOGICAL] /usa> datasource db1.nyc.tu backup
[LOGICAL] /usa> datasource db1.nyc.tu shun
$ db1> mysql -hdb1.nyc.tu -P13306 -utungsten -p
mysql> # Apply backwards compatible changes to db1.nyc.tu
# If unsuccessful
[LOGICAL] /usa> datasource db1.nyc.tu restore
# Restart the process
# If successful
[LOGICAL] /usa> datasource db1.nyc.tu welcome
[LOGICAL] /usa> switch to db1.nyc.tu
34. Changing Tungsten Configuration
• Configuration changes are made through tpm
in the staging directory
• Use tpm configure to make changes
• Use tpm update to push changes out to each
server and restart services
35. Changing Tungsten Configuration
$ db1> cd /opt/continuent/software
$ db1> cd tungsten-enterprise-1.5.3-89
$ db1> ./tools/tpm configure usa \
    --dataservice-hosts=db1.nyc.tu,db2.nyc.tu,db3.nyc.tu \
    --dataservice-connectors=db1.nyc.tu,db2.nyc.tu,db3.nyc.tu
$ db1> ./tools/tpm update world
$ db1> cctrl -multi
36. Next Steps
• Register at http://www.continuent.com/downloads/software
• Initiate a POC with Continuent
• Testing production hardware and data sets
• Application testing
• Integration into monitoring and alerting
• Training for operations personnel
38. Feedback
• Send any feedback to
tu@continuent.com
39. Tungsten University Sessions
• Configure & provision Tungsten clusters
Tuesday, January 22 @ 15:00 GMT/16:00 CET
• Setup & operate Tungsten Replicator
Thursday, January 31 @ 10 am PT/1 pm ET
• Setup & operate Tungsten Replicator
Tuesday, February 5 @ 15:00 GMT/16:00 CET
560 S. Winchester Blvd., Suite 500
San Jose, CA 95128
Tel +1 (866) 998-3642
Fax +1 (408) 668-1009
e-mail: sales@continuent.com

Our Blogs:
http://scale-out-blog.blogspot.com
http://datacharmer.blogspot.com
http://flyingclusters.blogspot.com
http://continuent-tungsten.blogspot.com

Continuent Website:
http://www.continuent.com

Tungsten Replicator 2.0:
http://code.google.com/p/tungsten-replicator