Amazon RDS makes it easy to set up, operate, and scale relational databases in the cloud. Amazon RDS for MySQL supports applications that require up to tens of thousands of IOPS, and allows you to scale on demand without administrative complexity. In this webinar, we will discuss best practices for getting the most out of Amazon RDS for MySQL, as well as techniques for migrating data to and from the service.
6. Why choose Amazon RDS?
Amazon RDS takes care of your time-consuming database management tasks,
freeing you to focus on your applications and business.
You focus on:
• Schema design
• Query construction
• Query optimization
Amazon RDS handles:
• High availability
• Backup and recovery
• Isolation and security
• Industry compliance
• Push-button scaling
• Automated patching
• Advanced monitoring
• Routine maintenance
7. We made it highly available, secure, easier, and cheaper
Push-button provisioning; automated scaling, patching, security, backups, restores,
and general care and feeding.
Lower TCO because we manage the muck:
► Get more leverage from your teams
► Focus on the things that differentiate you
Built-in high availability and cross-region replication across multiple data centers.
Now even a small startup can leverage multiple data centers to design highly available
apps with over 99.95% availability.
8. High availability with Multi-AZ deployments
An enterprise-grade fault tolerance solution for production databases.
An Availability Zone is a physically distinct, independent infrastructure.
Your database is synchronously replicated to another AZ in the same AWS region.
Failover occurs automatically for the most important failure scenarios.
10. Choose cross-region read replicas for faster disaster recovery and enhanced data locality
• Promote a read replica to a master for faster recovery in the event of a disaster
• Bring data close to your customers' applications in different regions
• Promote to a master for easy migration
11. Choose cross-region snapshot copy for even greater durability and ease of migration
• Copy a database snapshot to a different AWS region
• Use it as a warm standby for disaster recovery
• Or use it as a base for migration to a different region
13. Importing from an external MySQL source
[Diagram: the app writes to an on-premises DB master; a backup taken from a DB slave is copied (scp) to a staging server on Amazon EC2 in the AWS region, the data is loaded into Amazon RDS, and replication is then established to catch up.]
14. External MySQL source considerations
• Take the backup from a replica/slave
• Compress backups for faster transfer
• Use a staging server on Amazon EC2
• Use primary key sort order where possible
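The steps above can be sketched as a single shell pipeline. This is an illustration only: the hostnames, database name, and staging path are hypothetical placeholders, while the mysqldump flags (`--single-transaction` for a consistent InnoDB backup, `--order-by-primary` for primary-key sort order) are real options. The command is assembled as a string rather than executed, so it can be inspected before use:

```python
def build_backup_pipeline(replica_host, database, staging_host,
                          dump_path="/data/sampledb.sql.gz"):
    """Build a shell pipeline: compressed backup taken from a replica,
    rows sorted by primary key for faster InnoDB loading, streamed over
    ssh to an EC2 staging server."""
    dump = (f"mysqldump -h {replica_host} --single-transaction "
            f"--order-by-primary --databases {database}")
    compress = "gzip -c"                                 # compress before transfer
    transfer = f"ssh {staging_host} 'cat > {dump_path}'"  # land on staging server
    return f"{dump} | {compress} | {transfer}"

cmd = build_backup_pipeline("replica.example.internal", "sampledb",
                            "staging.example.internal")
print(cmd)
```

Compressing in the pipeline avoids writing an uncompressed dump to disk on the source, and keeps the scp/ssh transfer small.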
15. Target Amazon RDS instance considerations
• Hardware: more memory + IOPS = faster loading
• Amazon RDS configuration
  • Disable Multi-AZ
  • Disable binary logging – set backup_retention=0
• MySQL configuration
  • Set innodb_flush_log_at_trx_commit to 0
  • Set autocommit to 0
  • Set unique_checks and foreign_key_checks to 0
  • Increase innodb_log_file_size (5.6)
16. Input format
SQL (standard mysqldump output):
• Loads schema, data, indexes, and constraints together
• SQL execution can be slow; statements run sequentially
• Need to restart from scratch after failures
• Best for smaller databases/tables (a few GB)
Flat files (mysqldump --tab with --fields-terminated-by):
• Load schema and data separately; split large tables
• Faster if file sizes are small
• Can load files in parallel
• Can resume each file (checkpointing)
• Best for larger databases/tables
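The "split large tables" step can be sketched as a simple line-based splitter: each chunk file can then be loaded in parallel with LOAD DATA and, on failure, re-loaded independently. The chunk size and the `.partNNNN` naming are arbitrary illustrative choices, not anything Amazon RDS mandates:

```python
def split_flat_file(path, lines_per_chunk=1_000_000):
    """Split a delimited dump file into numbered chunk files so each can
    be loaded (and, after a failure, retried) independently."""
    chunks = []
    buf, chunk = [], 0

    def flush():
        nonlocal buf, chunk
        out = f"{path}.part{chunk:04d}"
        with open(out, "w") as dst:
            dst.writelines(buf)
        chunks.append(out)
        buf, chunk = [], chunk + 1

    with open(path) as src:
        for line in src:
            buf.append(line)
            if len(buf) >= lines_per_chunk:
                flush()
        if buf:  # remainder smaller than one chunk
            flush()
    return chunks
```

Each resulting file becomes one unit of work for a parallel loader, which is what makes checkpointing per file possible.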
17. Configure external replication source
Create the replication user on the master:
mysql> GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO repluser@'<RDS endpoint>' IDENTIFIED BY '<password>';
Take the backup, and record the "File" and "Position" values it contains:
$ mysqldump --databases sampledb --master-data=2 --single-transaction \
  -r sampledbdump.sql -u mysqluser -p
--
-- Position to start replication or point-in-time recovery from
--
-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin-changelog.000031', MASTER_LOG_POS=107;
18. Configure replication target
Configure the replication target and start replication:
mysql> call mysql.rds_set_external_master('<master server>', 3306, '<replication user>', '<password>', 'mysql-bin-changelog.000031', 107, 0);
mysql> call mysql.rds_start_replication;
Stop the app pointing at the source. Stop replication after the target catches up:
mysql> call mysql.rds_stop_replication;
Promote the target Amazon RDS database instance:
mysql> call mysql.rds_reset_external_master;
Point the app at the target Amazon RDS database instance.
19. Exporting data to an external source
[Diagram: the app writes to the Amazon RDS master in the AWS region; data is dumped to a staging server, then copied (scp) and loaded into an external slave.]
Retain binary logs on the source long enough for the external slave to catch up:
mysql> call mysql.rds_set_configuration('binlog retention hours', 48);
21. AWS Database Migration Service
Move data to the same or a different database engine
• Keep your apps running during the migration
• Start your first migration in 10 minutes or less
• Replicate within, to, or from Amazon EC2 or Amazon RDS
25. MySQL 5.5 – Replication after failover
• MySQL 5.5 defaults to sync_binlog = 0
• On master crash recovery, a new binary log is started
• The slave still requests a position from the older binary log:
[ERROR] Slave I/O: Got fatal error 1236 from master when reading data from binary log:
'Client requested master to start replication from impossible position'
• Setting sync_binlog = 1 avoids this, but at a 2–4x performance hit
26. MySQL 5.6 – Replication after failover
• sync_binlog = 1 by default
• Binary log group commit improves performance
• Crash-safe slaves improve replica reliability
Source: https://blogs.oracle.com/MySQL/entry/mysql_5_6_replication_performance
35. InnoDB cache warming for unplanned failures
mysql> DELIMITER ;;
mysql> CREATE EVENT evt_dump_innodb_cache
       ON SCHEDULE EVERY 1 HOUR STARTS '2014-11-06 01:00:00'
       DO BEGIN
         CALL mysql.rds_innodb_buffer_pool_dump_now();
       END;;
mysql> DELIMITER ;
36. Schema changes
Considerations:
• Ease of use
• Performance
• Time to complete
Options:
• Read replicas
• Native 5.6 functionality
• Third-party tools like Percona
38. Native MySQL 5.6 feature (online DDL)
• Easy to use – available by default
• In-place – no blocked DML in most cases
• Performance impact
  • Data reorganization is still needed in some cases
  • No ability to control/throttle CPU or I/O utilization
  • Can increase replica lag
• Time to complete
  • Relatively short
42. Burst mode – GP2 & T2
• GP2 – SSD-based Amazon EBS storage
  • 3 IOPS per GB of base performance
  • Earns credits when usage is below the base
  • Bursts up to 3,000 IOPS
• T2 – Amazon EC2 instance type with burst capability
  • Base performance + burst
  • Earns credits per hour when below base performance
  • Can store up to 24 hours' worth of credits
• Amazon CloudWatch metrics show credits and usage
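The gp2 burst model above can be made concrete with a little arithmetic. The figures used here (3 IOPS/GB base, a 3,000 IOPS burst ceiling, and a 5.4 million I/O credit bucket) are the published gp2 parameters; the function itself is just an illustration:

```python
def gp2_burst_seconds(volume_gib,
                      burst_iops=3000,
                      bucket_credits=5_400_000):
    """How long a full gp2 credit bucket sustains maximum burst.

    Base performance is 3 IOPS per GiB; bursting above base drains the
    credit bucket at (burst - base) I/O credits per second.
    """
    base_iops = 3 * volume_gib
    if base_iops >= burst_iops:
        return float("inf")  # base performance already meets the burst ceiling
    return bucket_credits / (burst_iops - base_iops)

# A 100 GiB volume (300 base IOPS) bursts at 3,000 IOPS for
# 5,400,000 / 2,700 = 2,000 seconds (about 33 minutes).
print(gp2_burst_seconds(100))
```

The same arithmetic explains why larger gp2 volumes "never run out": at 1 TiB and above, base performance reaches the burst ceiling, so the bucket never drains.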
45–49. Burst mode vs. Standard vs. Provisioned IOPS
[Chart: transactions per second (TPS) over 24 hours for a 100% read workload on 20 GB of data, adding one configuration per slide:]
• db.m1.medium + 200 GB standard – $0.575 per hour
• db.m3.medium + 200 GB + 2000 IOPS – $0.408 per hour
• db.m3.large + 200 GB + 2000 IOPS – $0.508 per hour
• db.t2.medium + 200 GB gp2 – $0.105 per hour
• db.t2.medium + 1 TB gp2 – $0.233 per hour
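A quick way to compare the hourly prices in the chart above over a month. The prices come straight from the slides; 730 hours per month is a common billing approximation, not an AWS-published constant:

```python
hourly_prices = {
    "db.m1.medium + 200GB standard": 0.575,
    "db.m3.medium + 200GB + 2000 IOPS": 0.408,
    "db.m3.large + 200GB + 2000 IOPS": 0.508,
    "db.t2.medium + 200GB gp2": 0.105,
    "db.t2.medium + 1TB gp2": 0.233,
}

HOURS_PER_MONTH = 730  # common approximation: 8,760 hours/year / 12

# Print configurations from cheapest to most expensive per month.
for config, price in sorted(hourly_prices.items(), key=lambda kv: kv[1]):
    print(f"{config}: ${price * HOURS_PER_MONTH:,.2f}/month")
```

The spread is wide: the burst-mode t2/gp2 configuration costs under a fifth of the m1 + standard storage setup per month, which is the point the chart series are making.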
50. Let us manage the muck and
keep your databases running
51. So you can focus on your business
and the things that differentiate you
52. Try Amazon RDS for free
• For your first year, at no charge:
  • Enough free instance-hours to run a "micro" instance continuously
  • 20 GB of database instance storage
  • 20 GB for automated backups
  • 10 million I/O operations per month
• Learn more about the AWS free tier: http://aws.amazon.com/free/
53. Learn more about Amazon RDS
• Amazon RDS home page: http://aws.amazon.com/rds/
• Amazon RDS Frequently Asked Questions:
http://aws.amazon.com/rds/faqs/
• Links to Import Guides for each engine:
http://aws.amazon.com/rds/faqs/#9
55. Online Labs & Training
Gain confidence and hands-on experience with AWS.
• Self-Paced Labs: watch free instructional videos and explore self-paced labs
• Instructor-Led Classes: learn how to design, deploy, and operate highly available, cost-effective, and secure applications on AWS in courses led by qualified AWS instructors
• AWS Certification: validate your technical expertise with AWS and use practice exams to help you prepare for AWS Certification
More info at http://aws.amazon.com/training