
Architecting cloud

Architecting your application for the cloud


  1. Architecting your application for the cloud
  2. Traditional solution
     - Buy servers
     - Buy storage
     - Sign a CDN (Content Delivery Network) contract
     - Launch the website/application
     - Manage scaling and provisioning
  3. Cloud solution
     Benefits from cloud computing:
     - No need to buy IT infrastructure
     - Deploy worldwide
     - Scale up/down when needed
     - Save time
     - Focus on your business
  4. Stage 1 – The Beginning
     - Simple architecture.
     - Low complexity and overhead mean quick development and lots of features, fast.
     - No redundancy and low operational costs: great for startups.
  5. Stage 2 – More of the same, just bigger
     - The business is becoming successful, so risk tolerance is low.
     - Add redundant firewalls and load balancers.
     - Add more web servers for high performance.
     - Scale up the database.
     - Add database redundancy.
     - Still simple.
  6. Stage 3 – The pain begins
     - Publicity hits.
     - Add a Squid or Varnish reverse proxy, or high-end load balancers.
     - Add even more web servers; managing content becomes painful.
     - A single database can't cut it anymore, so reads and writes are split: all writes go to a single master server with read-only slaves (see the sketch below).
     - May require some re-coding of the apps.
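      A minimal sketch of that read/write split at the application level, assuming PyMySQL and hypothetical host names, credentials and tables; the point is only the routing rule, writes to the master and reads spread across the read-only slaves:

          # Route writes to the master and reads to a randomly chosen read-only slave.
          # Hosts, credentials and the comments table are illustrative assumptions.
          import random
          import pymysql

          MASTER = dict(host="master.db.internal", user="app", password="secret", database="exampledb")
          SLAVES = [
              dict(host="slave1.db.internal", user="app", password="secret", database="exampledb"),
              dict(host="slave2.db.internal", user="app", password="secret", database="exampledb"),
          ]

          def get_connection(for_write):
              cfg = MASTER if for_write else random.choice(SLAVES)
              return pymysql.connect(**cfg)

          def save_comment(user_id, body):
              conn = get_connection(for_write=True)      # writes always hit the master
              with conn.cursor() as cur:
                  cur.execute("INSERT INTO comments (user_id, body) VALUES (%s, %s)", (user_id, body))
              conn.commit()
              conn.close()

          def list_comments(user_id):
              conn = get_connection(for_write=False)     # reads can go to any slave
              with conn.cursor() as cur:
                  cur.execute("SELECT body FROM comments WHERE user_id = %s", (user_id,))
                  rows = cur.fetchall()
              conn.close()
              return rows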
  7. Stage 4 – The pain intensifies
     - Replication doesn't work for everything: with a single write database and too many writes, replication takes too long to catch up.
     - Database partitioning starts to make sense; certain features get their own database.
     - Shared storage makes sense for content.
     - Requires significant re-architecting of the application and the database.
  8. Stage 5 – This really hurts!
     - Panic sets in. We are re-thinking the entire application: now we want to go for scale?
     - We can't just partition on features; what else can we use? Geography, last name, user ID, etc. Create user clusters.
     - All features are available on each cluster.
     - Use a hashing scheme or a master DB for locating which user belongs to which cluster (see the sketch below).
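      A minimal sketch of both locating strategies, with illustrative cluster names: a stable hash of the user ID needs no central lookup, while an explicit mapping (standing in for the "master DB") allows per-user rebalancing at the cost of one lookup per request.

          # Map a user to a cluster either by hashing the user ID or via a lookup table.
          import hashlib

          CLUSTERS = ["cluster-a", "cluster-b", "cluster-c"]   # illustrative cluster names

          def cluster_by_hash(user_id):
              """Stable hash of the user ID decides the cluster; no central lookup needed."""
              digest = hashlib.sha1(str(user_id).encode("utf-8")).hexdigest()
              return CLUSTERS[int(digest, 16) % len(CLUSTERS)]

          # "Master DB" variant: an explicit mapping that can be rebalanced user by user
          # (a plain dict here stands in for the lookup database).
          USER_TO_CLUSTER = {"alice": "cluster-b"}

          def cluster_by_lookup(user_id):
              return USER_TO_CLUSTER.get(user_id, cluster_by_hash(user_id))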
  9. Stage 6 – Getting a little less painful
     - Scalable application and database architecture.
     - Acceptable performance.
     - Starting to add new features again.
     - Optimizing some of the code.
     - Still growing, but manageable.
  10. Stage 7 – Entering the unknown
      - Where are the remaining bottlenecks?
        - Power, space
        - Bandwidth, CDN; is the hosting provider big enough?
        - Firewall and load balancer bottlenecks?
        - Storage
        - Database technology limits: key/value store, anyone?
  11. Amazon services used
      - Servers: Amazon EC2
      - Storage: Amazon S3
      - Database: Amazon RDS
      - Content delivery: Amazon CloudFront
      - Extras: Auto Scaling, Elastic Load Balancing
  13. What is in step 1
      - Launched a Linux server (EC2)
      - Installed a web server
      - Downloaded the website
      - Opened the website
      - Now our traffic goes up...
  14. To reach fans worldwide, we need a CDN.
  16. Changes in the HTML code
      - images/stirling1.jpg becomes d135c2250.cloudfront.net/stirling1.jpg (see the sketch below).
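      A small sketch of that rewrite as a helper function; the CloudFront host comes from the slide, while the helper itself and the http scheme are illustrative assumptions.

          # Rewrite a local image path to its CloudFront URL.
          import posixpath

          CDN_HOST = "d135c2250.cloudfront.net"

          def cdn_url(local_path):
              return "http://%s/%s" % (CDN_HOST, posixpath.basename(local_path))

          # cdn_url("images/stirling1.jpg") -> "http://d135c2250.cloudfront.net/stirling1.jpg"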
  17. What is in step 2
      - Uploaded files to Amazon S3
      - Enabled a CloudFront distribution
      - Updated our picture locations
  18. Our IT architecture needs an update
  21. What is in step 3
      - We added Auto Scaling and watched it grow the number of servers
      - We added an Elastic Load Balancer (see the sketch below)
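      The deck itself sets this up through the AWS console; a sketch of the equivalent calls with boto3, where the AMI ID, resource names, sizes and availability zone are illustrative assumptions:

          # Classic ELB plus an Auto Scaling group that registers its instances with it.
          import boto3

          elb = boto3.client("elb")
          autoscaling = boto3.client("autoscaling")

          elb.create_load_balancer(
              LoadBalancerName="web-lb",
              Listeners=[{"Protocol": "HTTP", "LoadBalancerPort": 80,
                          "InstanceProtocol": "HTTP", "InstancePort": 80}],
              AvailabilityZones=["us-east-1a"],
          )

          autoscaling.create_launch_configuration(
              LaunchConfigurationName="web-lc",
              ImageId="ami-12345678",          # illustrative AMI
              InstanceType="t2.micro",
          )
          autoscaling.create_auto_scaling_group(
              AutoScalingGroupName="web-asg",
              LaunchConfigurationName="web-lc",
              MinSize=2,
              MaxSize=10,
              AvailabilityZones=["us-east-1a"],
              LoadBalancerNames=["web-lb"],    # new instances register with the ELB
          )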
  23. What is in step 4
      - Launched a database instance (see the sketch below)
      - Pointed the web servers to RDS
      - Created a read replica
      - Created a snapshot
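      Again the deck uses the console; a boto3 sketch of the same RDS steps, with illustrative identifiers, instance class and credentials:

          # Primary MySQL instance, a read replica, and a snapshot.
          import boto3

          rds = boto3.client("rds")

          rds.create_db_instance(
              DBInstanceIdentifier="exampledb-master",
              Engine="mysql",
              DBInstanceClass="db.t3.micro",
              AllocatedStorage=20,
              MasterUsername="admin",
              MasterUserPassword="change-me",
          )

          # In practice, wait until the master instance is available before running these two.
          rds.create_db_instance_read_replica(
              DBInstanceIdentifier="exampledb-replica1",
              SourceDBInstanceIdentifier="exampledb-master",
          )
          rds.create_db_snapshot(
              DBSnapshotIdentifier="exampledb-initial",
              DBInstanceIdentifier="exampledb-master",
          )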
  24. What is difficult about databases?
  25. Availability Patterns
      - Fail-over IP
      - Replication
        - Master-slave
        - Master-master
        - Tree replication
        - Buddy replication
  26. Master-Slave Replication
  27. Master-Slave Replication
      - Assume both the master and the slave run Ubuntu Natty (11.04) with MySQL installed.
      - Configure the master: MySQL must listen on all IP addresses, so in /etc/mysql/my.cnf comment out:
            #skip-networking
            #bind-address = 127.0.0.1
      - Set the binary log file and the database to replicate, and mark this server as the master:
            log-bin      = /var/log/mysql/mysql-bin.log
            binlog-do-db = exampledb
            server-id    = 1
      - Then restart MySQL:
            /etc/init.d/mysql restart
  28. Master-Slave Replication
      - Log in to MySQL on the master server:
            mysql -u root -p
            Enter password:
      - Grant replication privileges on this database to the slave user:
            GRANT REPLICATION SLAVE ON *.* TO 'slave_user'@'%' IDENTIFIED BY '<some_password>';
            FLUSH PRIVILEGES;
      - Then run the following commands:
            USE exampledb;
            FLUSH TABLES WITH READ LOCK;
      - This shows the master log file name and the read position:
            SHOW MASTER STATUS;
  29. Master-Slave Replication
      - Make a dump of the database on the master server:
            mysqldump -u root -p<password> --opt exampledb > exampledb.sql
      - Or run this command on the slave to fetch the data from the master:
            LOAD DATA FROM MASTER;
      - Now unlock the tables:
            mysql -u root -p
            Enter password:
            UNLOCK TABLES;
            quit;
  30. Master-Slave Replication: Configure the Slave
      - First log in to MySQL on the slave and create the database:
            mysql -u root -p
            Enter password:
            CREATE DATABASE exampledb;
            quit;
      - Import the database from the MySQL dump:
            mysql -u root -p<password> exampledb < /path/to/exampledb.sql
      - Now configure the slave server in /etc/mysql/my.cnf:
            server-id            = 2
            master-host          = 192.168.0.100
            master-user          = slave_user
            master-password      = secret
            master-connect-retry = 60
            replicate-do-db      = exampledb
      - Then restart MySQL:
            /etc/init.d/mysql restart
  31. Master-Slave Replication: Configure the Slave
      - We can also load the data from the master with the following command:
            mysql -u root -p
            Enter password:
            LOAD DATA FROM MASTER;
            quit;
      - Then stop the slave threads:
            mysql -u root -p
            Enter password:
            SLAVE STOP;
      - Run the following command to point the slave at the master, using the log file and position reported by SHOW MASTER STATUS:
            CHANGE MASTER TO MASTER_HOST='192.168.0.100', MASTER_USER='slave_user',
                MASTER_PASSWORD='<some_password>', MASTER_LOG_FILE='mysql-bin.006', MASTER_LOG_POS=183;
      - And then start the slave:
            START SLAVE;
            quit;
  32. Master-Master Replication
  33. Master-Master Replication: master1 configuration
      - We will refer to system 1 as master1/slave2 and to system 2 as master2/slave1.
      - Open the master MySQL configuration file, /etc/mysql/my.cnf, and add the block below. It sets the data and socket paths, enables the binary log, and names the database to replicate:
            [mysqld]
            datadir=/var/lib/mysql
            socket=/var/lib/mysql/mysql.sock
            old_passwords=1
            log-bin
            binlog-do-db=<database name>
            binlog-ignore-db=mysql
            binlog-ignore-db=test
            server-id=1

            [mysql.server]
            user=mysql
            basedir=/var/lib

            [mysqld_safe]
            err-log=/var/log/mysqld.log
            pid-file=/var/run/mysqld/mysqld.pid
      - Create a replication slave account for the other server:
            mysql> grant replication slave on *.* to 'replication'@192.168.16.5 identified by 'slave';
  34. Master-Master Replication: slave2 configuration
      - Now edit the slave2 MySQL configuration file:
            [mysqld]
            datadir=/var/lib/mysql
            socket=/var/lib/mysql/mysql.sock
            old_passwords=1
            server-id=2
            master-host     = 192.168.16.4
            master-user     = replication
            master-password = slave
            master-port     = 3306

            [mysql.server]
            user=mysql
            basedir=/var/lib

            [mysqld_safe]
            err-log=/var/log/mysqld.log
            pid-file=/var/run/mysqld/mysqld.pid
  35. Master-Master Replication: start the master1/slave1 server
      - Start the slave:
            mysql> start slave;
      - Check its status:
            mysql> show slave status\G
      - Example output:
            Slave_IO_State: Waiting for master to send event
            Master_Host: 192.168.16.4
            Master_User: replica
            Master_Port: 3306
            Connect_Retry: 60
            Master_Log_File: MASTERMYSQL01-bin.000009
            Read_Master_Log_Pos: 4
            Relay_Log_File: MASTERMYSQL02-relay-bin.000015
            Relay_Log_Pos: 3630
            Relay_Master_Log_File: MASTERMYSQL01-bin.000009
            Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
            Replicate_Do_DB:
            Replicate_Ignore_DB:
            Replicate_Do_Table:
            Replicate_Ignore_Table:
            Replicate_Wild_Do_Table:
            Replicate_Wild_Ignore_Table:
            Last_Errno: 0
            Last_Error:
            Skip_Counter: 0
            Exec_Master_Log_Pos: 4
            Relay_Log_Space: 3630
            Until_Condition: None
            Until_Log_File:
            Until_Log_Pos: 0
            Master_SSL_Allowed: No
            Master_SSL_CA_File:
            Master_SSL_CA_Path:
            Master_SSL_Cert:
            Master_SSL_Cipher:
            Master_SSL_Key:
            Seconds_Behind_Master: 1519187
  36. Master-Master Replication: creating the master2/slave2
      - On master2/slave1, edit my.cnf and add the master entries to it:
            [mysqld]
            datadir=/var/lib/mysql
            socket=/var/lib/mysql/mysql.sock
            old_passwords=1
            server-id=2
            master-host     = 192.168.16.4
            master-user     = replication
            master-password = slave
            master-port     = 3306
            log-bin
            binlog-do-db=adam

            [mysql.server]
            user=mysql
            basedir=/var/lib

            [mysqld_safe]
            err-log=/var/log/mysqld.log
            pid-file=/var/run/mysqld/mysqld.pid
      - Create a replication slave account on master2 for master1:
            mysql> grant replication slave on *.* to 'replication'@192.168.16.4 identified by 'slave2';
  37. Master-Master Replication: creating the master2/slave2
      - Edit my.cnf on master1 with the information about its master:
            [mysqld]
            datadir=/var/lib/mysql
            socket=/var/lib/mysql/mysql.sock
            old_passwords=1
            log-bin
            binlog-do-db=adam
            binlog-ignore-db=mysql
            binlog-ignore-db=test
            server-id=1
            # information for becoming a slave
            master-host     = 192.168.16.5
            master-user     = replication
            master-password = slave2
            master-port     = 3306

            [mysql.server]
            user=mysql
            basedir=/var/lib
  38. Master-Master Replication
      - Restart MySQL on both master1 and master2.
      - On master1:
            mysql> start slave;
      - On master2:
            mysql> show master status;
      - On master1:
            mysql> show slave status\G
  39. Managing overload
  40. Load Balancing Algorithms
      - Random allocation
      - Round-robin allocation
      - Weighted allocation
      - Dynamic load balancing
      - Least connections
      - Least server CPU
      (Round-robin and least connections are sketched below.)
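      A minimal sketch of two of the strategies above, with illustrative server names and connection counts:

          # Round-robin vs. least-connections server selection.
          import itertools

          SERVERS = ["web1", "web2", "web3"]

          _rotation = itertools.cycle(SERVERS)

          def pick_round_robin():
              """Hand out servers in a fixed rotation, one after another."""
              return next(_rotation)

          # Current connection counts would come from the balancer's bookkeeping;
          # the numbers here are made up.
          active_connections = {"web1": 12, "web2": 7, "web3": 9}

          def pick_least_connections():
              """Pick the server that is currently handling the fewest connections."""
              return min(active_connections, key=active_connections.get)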
  41. Load Balancer in Rackspace
      - Add a cloud load balancer. If you already have a Rackspace Cloud account, use the "Create Load Balancer" API operation.
      - Configure the cloud load balancer: select a name, protocol, port, algorithm, and the servers to be load balanced.
      - Enjoy the cloud load balancer, which will be online in just a few minutes. Each cloud load balancer can be customized or removed as our needs change.
  42. Security
  43. Security
      - Firewalls: iptables. The iptables program lets slice admins configure the Linux kernel firewall.
      - Log rotation (logrotate). "Log rotation" refers to the practice of archiving an application's current log, starting a fresh log, and deleting older logs.
  44. iptables
  45. Configuring iptables
      sudo /sbin/iptables -F                                                        # flush any existing rules
      sudo /sbin/iptables -A INPUT -i eth0 -p tcp -m tcp --dport 30000 -j ACCEPT    # allow inbound TCP on port 30000
      sudo /sbin/iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT   # keep established connections alive
      sudo /sbin/iptables -A INPUT -j REJECT                                        # reject all other inbound traffic
      sudo /sbin/iptables -A FORWARD -j REJECT                                      # no forwarding
      sudo /sbin/iptables -A OUTPUT -j ACCEPT                                       # allow all outbound traffic
      sudo /sbin/iptables -I INPUT -i lo -j ACCEPT                                  # always accept loopback traffic
      sudo /sbin/iptables -I INPUT 4 -p tcp --dport 80 -j ACCEPT                    # insert HTTP above the catch-all REJECT
      sudo /sbin/iptables -I INPUT 5 -p tcp --dport 443 -j ACCEPT                   # insert HTTPS above the catch-all REJECT
  46. Secure??
      - DDoS attack: Distributed Denial of Service attack.
      - Wikileaks.com: is it alive?
  47. Log Rotate
      - Main configuration file: /etc/logrotate.conf; per-service snippets live in /etc/logrotate.d (ls /etc/logrotate.d).
      - Example rotation block for Apache:
            /var/log/apache2/*.log {
                weekly
                missingok
                rotate 52
                compress
                delaycompress
                notifempty
                create 640 root adm
                sharedscripts
                postrotate
                    if [ -f "`. /etc/apache2/envvars ; echo ${APACHE_PID_FILE:-/var/run/apache2.pid}`" ]; then
                        /etc/init.d/apache2 reload > /dev/null
                    fi
                endscript
            }
  48. Failover IP
      - You can 'share' an IP address between two servers, so when one server is not available the other takes over the IP address.
      - For this you need two servers. Let's keep it simple and call one the 'Master' and one the 'Slave'.
      - What this comes down to is creating a high-availability setup with your Slices, so your site doesn't go down when one of them fails.
  49. Heartbeat
      - The failover is not automatic by itself; you need to install an application to make it happen.
      - Heartbeat runs on both the Master and the Slave. The two servers chat away and keep an eye on each other. If the Master goes down, the Slave notices and brings up the same IP address the Master was using.
  50. How to Configure Heartbeat
      - Update the package list, check whether anything needs upgrading, and install Heartbeat:
            sudo aptitude update
            sudo aptitude safe-upgrade
            sudo aptitude install heartbeat
      - The configuration files live in /etc/heartbeat/.
  51. Configuring Heartbeat
      - Create the authentication key file:
            sudo nano /etc/heartbeat/authkeys
      - The contents are as simple as this:
            auth 1
            1 sha1 YourSecretPassPhrase
      - Restrict its permissions:
            sudo chmod 600 /etc/heartbeat/authkeys
  52. Configuring Heartbeat
      - Edit the haresources file:
            sudo nano /etc/heartbeat/haresources
            master 123.45.67.890/24
      - The name 'master' is the hostname of the MASTER server and the IP address (123.45.67.890) is the IP address of the MASTER server.
      - To drive this home: this file needs to be the same on BOTH servers.
  53. Master ha.cf file
      - Edit the file (sudo nano /etc/heartbeat/ha.cf); the contents would be as follows:
            logfacility daemon
            keepalive 2
            deadtime 15
            warntime 5
            initdead 120
            udpport 694
            ucast eth1 172.0.0.0                         # the private IP address of your SLAVE server
            auto_failback on
            node master                                  # the hostname of your MASTER server
            node slave                                   # the hostname of your SLAVE server
            respawn hacluster /usr/lib/heartbeat/ipfail
            use_logd yes
  54. Creating the Slave ha.cf
      - Let's open the file on the Slave server:
            sudo nano /etc/heartbeat/ha.cf
      - The contents will need to be:
            logfacility daemon
            keepalive 2
            deadtime 15
            warntime 5
            initdead 120
            udpport 694
            ucast eth1 172.0.0.1                         # the private IP address of your MASTER server
            auto_failback on
            node master
            node slave
            respawn hacluster /usr/lib/heartbeat/ipfail
            use_logd yes
      - Once done, save the file and restart Heartbeat on the Slave Slice:
            sudo /etc/init.d/heartbeat restart
  55. Testing the failover IP
      - Start off with both servers running and ping the main IP (the IP we have set up to fail over) on the Master server:
            ping -c2 123.45.67.890
      - The '-c2' option simply tells ping to send two pings.
      - Now shut down the Master Slice:
            sudo shutdown -h now
      - Without the failover IP there would be no response to the ping, as the server is down. With Heartbeat in place, we notice that the IP is still responding to pings.
  56. Who Am I?
      - Tahsin Hasan
      - Senior Software Engineer, Tasawr Interactive
      - Author of two books, 'Joomla Mobile Web Development Beginner's Guide' and 'OpenCart 1.4 Template Design Cookbook', with Packt Publishing, UK
      - [email_address]
      - http://newdailyblog.blogspot.com (tahSin's gaRage)
  57. Questions?
