1. © 2018, Amazon Web Services, Inc. or its Affiliates. All rights reserved.
Amazon Aurora and
AWS Database Migration Service
2.
Agenda
• Amazon Aurora
• Architecture
• Key Features
• Migration Options
• AWS Database Migration Service
• AWS Schema Conversion Tool
• Demos
• Q & A
3.
Options for hosting databases
Self-managed: database in your corporate data center, or DB on an EC2 instance
Fully managed: Amazon RDS
4.
If you host your databases on-premises, you are responsible for everything:
• Power, HVAC, net
• Rack & stack
• Server maintenance
• OS installation
• OS patches
• DB s/w installs
• DB s/w patches
• Database backups
• High availability
• Scaling
• App optimization
5.
If you host your databases in EC2, AWS takes care of the infrastructure:
• Power, HVAC, net
• Rack & stack
• Server maintenance
You still handle everything from the OS up:
• OS installation
• OS patches
• DB s/w installs
• DB s/w patches
• Database backups
• High availability
• Scaling
• App optimization
6.
If you choose a managed database service, AWS takes care of:
• Power, HVAC, net
• Rack & stack
• Server maintenance
• OS installation
• OS patches
• DB s/w installs
• DB s/w patches
• Database backups
• High availability
• Scaling
You focus on:
• App optimization
with additional help available through Database Tuning, Design Consultation, and Best Practices
7.
Relational databases
Fully managed and secure
Fast, predictable performance
Simple and fast to scale
Low cost, pay for what you use
Amazon RDS
Amazon Aurora
8.
Speed and availability of high-end commercial databases
Simplicity and cost-effectiveness of open source databases
Drop-in compatibility with MySQL and PostgreSQL
Simple pay as you go pricing
Delivered as a managed service
Amazon Aurora: A relational database reimagined for the cloud
9.
How is Amazon Aurora different?
10.
Traditional relational databases are hard to scale
Multiple layers of
functionality all in a
monolithic stack
SQL
Transactions
Caching
Logging
Storage
11.
Traditional approaches to scale databases
Each architecture is limited by the monolithic mindset
Even when you scale out, you’re still replicating the same stack.
Sharding – coupled at the application layer: the application routes across many complete stacks (SQL, Transactions, Caching, Logging, Storage), one per shard
Shared Nothing – coupled at the SQL layer: each node runs the complete stack against its own storage
Shared Disk – coupled at the caching and storage layer: multiple complete stacks share common storage
12.
This is a problem…
For performance.
For scalability.
And for availability.
13.
Reimagining the relational database
What if you were inventing the database today?
You wouldn’t design it the way we did in 1970.
You’d build something that
Can scale out ….
Can self-heal ….
Leverages cloud services …
14.
Amazon Aurora: A service-oriented architecture applied to the database
Moved the logging and storage layer into a
multi-tenant, scale-out database-optimized
storage service
Integrated with other AWS services like
Amazon EC2, Amazon VPC, Amazon
DynamoDB, Amazon SWF, and Amazon
Route 53 for control plane operations
Integrated with Amazon S3 for continuous
backup with 99.999999999% durability
Diagram: data plane (SQL, Transactions, Caching, and the Logging + Storage service backed by Amazon S3); control plane (Amazon DynamoDB, Amazon SWF, Amazon Route 53, AWS Lambda, IAM, Amazon CloudWatch)
15.
Diagram: Master and Read Replicas spanning AZ 1–3 over massively scale-out storage distributed across 3 AZs, with continuous backup to Amazon S3
• No need to specify storage – it’s allocated automatically in 10 GB increments as data grows
• Eliminates hot spots; high concurrent access
• Storage is automatically replicated across 3 AZs for durability and HA – 6 copies of the data, 2 per AZ
• Quorum model for writes & reads
• Up to 15 Read Replicas: increase read throughput, use as failover targets, share storage with the Master
Aurora at a glance
16.
Sounds great…
So is it faster, scalable,
reliable, and available?
17.
5X faster than MySQL
WRITE PERFORMANCE: 4 client machines with 1,000 connections each
READ PERFORMANCE: single client machine with 1,600 connections
MySQL SysBench results on R3.8XL (32 cores / 244 GB RAM)
Five times higher throughput than stock MySQL,
based on industry standard benchmarks.
18.
Reproducing these results
https://d0.awsstatic.com/product-marketing/Aurora/RDS_Aurora_Performance_Assessment_Benchmarking_v1-2.pdf
Setup: four R3.8XLARGE SysBench clients driving one Amazon Aurora R3.8XLARGE instance.
1. Create an Amazon VPC (or use an existing one).
2. Create four EC2 R3.8XL client instances to run the SysBench client. All four should be in the same AZ.
3. Enable enhanced networking on your clients.
4. Tune your Linux settings (see whitepaper).
5. Install SysBench version 0.5.
6. Launch an r3.8xlarge Amazon Aurora DB instance in the same VPC and AZ as your clients.
7. Start your benchmark!
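The steps above can be sketched as a small helper that assembles the SysBench 0.5 invocation. The flag names follow the legacy 0.5 CLI and the host/user/thread values are placeholders, so treat this as an assumption-laden sketch rather than the whitepaper's exact command:

```python
# Sketch: assemble a SysBench 0.5 OLTP command line pointed at an Aurora
# endpoint. Flag names are the legacy 0.5 spellings; values are placeholders.
def sysbench_cmd(host, user, password, threads=1000, tables=250, table_size=25000):
    return [
        "sysbench",
        "--test=oltp",                     # legacy 0.5 OLTP test
        f"--mysql-host={host}",            # Aurora cluster endpoint
        f"--mysql-user={user}",
        f"--mysql-password={password}",
        f"--oltp-tables-count={tables}",
        f"--oltp-table-size={table_size}",
        f"--num-threads={threads}",        # 1,000 connections per client
        "run",
    ]

cmd = sysbench_cmd("aurora.cluster-example.us-east-1.rds.amazonaws.com",
                   "admin", "secret")
```

Each of the four clients would run a command built this way against the same endpoint.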
19.
If only real world applications saw benchmark performance
POSSIBLE DISTORTIONS
Real world requests contend with each other
Real world metadata rarely fits in data dictionary cache
Real world data rarely fits in buffer cache
Real world production databases need to run HA
Beyond benchmarks
20.
Scaling User Connections – SysBench OLTP workload, 250 tables (up to 8x faster):

Connections   Amazon Aurora   RDS MySQL w/ 30K IOPS
50            40,000          10,000
500           71,000          21,000
5,000         110,000         13,000
21.
Scaling Table Count – SysBench write-only workload, writes per second, 1,000 connections (up to 11x faster):

Tables   Amazon Aurora   MySQL I2.8XL local SSD   MySQL I2.8XL RAM disk   RDS MySQL w/ 30K IOPS (single AZ)
10       60,000          18,000                   22,000                  25,000
100      66,000          19,000                   24,000                  23,000
1,000    64,000          7,000                    18,000                  8,000
10,000   54,000          4,000                    8,000                   5,000
22.
Scaling Data Set – up to 67x (SysBench) and 136x (TPC-C) faster:

SYSBENCH WRITE-ONLY
DB Size   Amazon Aurora   RDS MySQL w/ 30K IOPS
1GB       107,000         8,400
10GB      107,000         2,400
100GB     101,000         1,500
1TB       26,000          1,200

CLOUDHARMONY TPC-C
DB Size   Amazon Aurora   RDS MySQL w/ 30K IOPS
80GB      12,582          585
800GB     9,406           69
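The "up to" multipliers quoted on this slide follow directly from the table rows; a quick arithmetic check (values copied from the slide, the helper name is mine):

```python
# Verify the "up to 67x" and "up to 136x" claims from the tables above.
# Each entry maps DB size -> (Aurora throughput, RDS MySQL throughput).
sysbench = {"1GB": (107_000, 8_400), "10GB": (107_000, 2_400),
            "100GB": (101_000, 1_500), "1TB": (26_000, 1_200)}
tpcc = {"80GB": (12_582, 585), "800GB": (9_406, 69)}

def max_speedup(rows):
    return max(aurora / mysql for aurora, mysql in rows.values())

print(round(max_speedup(sysbench)))  # → 67  (100GB row: 101,000 / 1,500)
print(round(max_speedup(tpcc)))      # → 136 (800GB row: 9,406 / 69)
```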
23.
Scaling With Replicas – SysBench write-only workload, 250 tables, replica lag (up to 500x lower):

Updates per second   Amazon Aurora   RDS MySQL 30K IOPS (single AZ)
1,000                2.62 ms         0 s
2,000                3.42 ms         1 s
5,000                3.94 ms         60 s
10,000               5.38 ms         300 s
24.
How did we achieve this?
DO LESS WORK
• Do fewer IOs
• Minimize network packets
• Cache prior results
• Offload the database engine

BE MORE EFFICIENT
• Process asynchronously
• Reduce latency path
• Use lock-free data structures
• Batch operations together
DATABASES ARE ALL ABOUT I/O
NETWORK-ATTACHED STORAGE IS ALL ABOUT PACKETS/SECOND
HIGH-THROUGHPUT PROCESSING DOES NOT ALLOW CONTEXT SWITCHES
25.
IO Traffic in MySQL

TYPES OF WRITE: binlog, data, double-write, log, FRM files

Diagram: MYSQL WITH REPLICA – Primary Instance (AZ 1) and Replica Instance (AZ 2), each writing to Amazon Elastic Block Store (EBS) with an EBS mirror; backups to Amazon S3

IO FLOW
• Issue write to EBS – EBS issues to mirror, ack when both done
• Stage write to standby instance through DRBD
• Issue write to EBS on standby instance

OBSERVATIONS
• Steps 1, 3, 4 are sequential and synchronous
• This amplifies both latency and jitter
• Many types of writes for each user operation
• Have to write data blocks twice to avoid torn writes

PERFORMANCE (30-minute SysBench write-only workload, 100GB dataset, RDS Multi-AZ, 30K PIOPS)
• 780K transactions
• 7,388K I/Os per million txns (excludes mirroring, standby)
• Average 7.4 I/Os per transaction
26.
IO Traffic in Aurora

TYPES OF WRITE: binlog, data, double-write, log, FRM files

Diagram: AMAZON AURORA – Primary Instance (AZ 1) and Replica Instances (AZ 2, AZ 3) issuing asynchronous 4/6-quorum distributed writes; backups to Amazon S3

IO FLOW
• Only write redo log records; all steps asynchronous
• No data block writes (checkpoint, cache replacement)
• Boxcar redo log records – fully ordered by LSN
• Shuffle to appropriate segments – partially ordered
• Boxcar to storage nodes and issue writes

OBSERVATIONS
• 6X more log writes, but 9X less network traffic
• Tolerant of network and storage outlier latency

PERFORMANCE
• 27,378K transactions – 35X MORE
• 950K I/Os per 1M txns (6X amplification) – 7.7X LESS
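The "35X MORE" and "7.7X LESS" figures follow from the numbers on this slide and the MySQL slide before it; a quick cross-check (figures copied from the slides):

```python
# Cross-check the Aurora vs. MySQL I/O figures quoted above.
mysql_txns, mysql_ios_per_m = 780_000, 7_388_000      # MySQL slide
aurora_txns, aurora_ios_per_m = 27_378_000, 950_000   # Aurora slide

print(round(aurora_txns / mysql_txns))                # → 35  ("35X MORE" txns)
print(round(mysql_ios_per_m / aurora_ios_per_m, 1))   # → 7.8 (~"7.7X LESS" I/O per txn)
print(round(mysql_ios_per_m / mysql_txns / (1_000_000 / mysql_txns), 1))  # sanity: ~7.4 I/Os per MySQL txn
```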
27.
IO Traffic in Aurora Replicas

MYSQL READ SCALING
• Logical: ship SQL statements to the Replica
• Write workload similar on both instances
• Independent storage
• Can result in data drift between Master and Replica
(Diagram: MySQL Master 30% read / 70% write; MySQL Replica 30% new reads / 70% write; single-threaded binlog apply; independent data volumes)

AMAZON AURORA READ SCALING
• Physical: ship redo from Master to Replica
• Replica shares storage; no writes performed
• Cached pages have redo applied
• Advance read view when all commits seen
(Diagram: Aurora Master 30% read / 70% write; Aurora Replica 100% new reads; shared Multi-AZ storage)
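The physical-replication idea above can be sketched in a few lines: the replica applies incoming redo only to pages it already has cached, while uncached pages come straight from the shared storage volume. This is a toy model with invented names, not Aurora's implementation:

```python
# Conceptual sketch: an Aurora-style replica applies redo to cached pages only;
# cache misses fall through to the shared storage volume, which is already durable.
class Replica:
    def __init__(self, shared_storage):
        self.storage = shared_storage   # shared volume (dict: page_id -> value)
        self.cache = {}                 # replica's page cache

    def on_redo(self, page_id, new_value):
        # Redo stream from the master: update cached pages only, never storage.
        if page_id in self.cache:
            self.cache[page_id] = new_value

    def read(self, page_id):
        # Cache miss reads the shared volume; no write traffic from the replica.
        return self.cache.get(page_id, self.storage[page_id])

storage = {"p1": "v2", "p2": "v1"}      # storage already reflects the write
r = Replica(storage)
r.cache["p1"] = "v1"                    # stale cached copy on the replica
r.on_redo("p1", "v2")                   # redo applied to the cached page
print(r.read("p1"), r.read("p2"))       # → v2 v1
```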
28.
What about availability?
“Performance only matters if your database is up”
29.
Amazon Aurora Storage Management
• Automatic storage scaling up to 64 TB—no performance impact
• Continuous, incremental backups to Amazon S3
• Instantly create user snapshots—no performance impact
• Automatic restriping, mirror repair, hot spot management, encryption
Up to 64 TB of storage—auto-incremented in 10 GB units
30.
Amazon Aurora: Fault-tolerance
6 copies across 3 Availability Zones (diagram: losing a whole AZ preserves read and write availability; losing an AZ plus one more copy still preserves read availability)

What can fail?
• Segment failures (disks)
• Node failures (machines)
• AZ failures (network or datacenter)

Optimizations
• 4 out of 6 write quorum
• 3 out of 6 read quorum
• Peer-to-peer replication for repairs
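The 4/6 write and 3/6 read quorums above can be checked with a few lines of arithmetic (a toy model, not Aurora code):

```python
# Toy model of Aurora's 6-copy quorum: writes need 4/6 copies, reads need 3/6.
COPIES, WRITE_QUORUM, READ_QUORUM = 6, 4, 3

def availability(copies_lost):
    """Return (writes_ok, reads_ok) after losing `copies_lost` copies."""
    alive = COPIES - copies_lost
    return alive >= WRITE_QUORUM, alive >= READ_QUORUM

print(availability(2))  # AZ failure (2 copies) → (True, True): read and write availability
print(availability(3))  # AZ + one more copy   → (False, True): read availability only
```

This is exactly the asymmetry the slide illustrates: an entire AZ can fail without losing writes, and an AZ plus one more copy can fail without losing reads.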
31.
Amazon Aurora: High-Availability
► Up to 15 promotable read replicas across multiple availability zones
► Redo-log-based replication leads to low replica lag – typically < 10 ms
► Reader endpoint with load balancing and auto-scaling * NEW *

Diagram: Master and Read Replicas on a shared distributed storage volume behind the reader endpoint
32.
Continuous Backup
Diagram: per-segment snapshots plus streamed log records over time; the recovery point for each segment (1–3) combines its most recent snapshot with the log records that follow it
• Take periodic snapshot of each segment in parallel; stream the redo logs to Amazon S3
• Backup happens continuously without performance or availability impact
• At restore, retrieve the appropriate segment snapshots and log streams to storage nodes
• Apply log streams to segment snapshots in parallel and asynchronously
Amazon S3
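The restore path described in the bullets above amounts to: rebuild each segment from its latest snapshot, then redo only the log records past the snapshot point, segment by segment in parallel. A minimal sketch of that idea (illustrative names, not AWS code):

```python
# Sketch: restore one segment from a snapshot plus its subsequent log records.
# Each log record is (LSN, key, value); only records past the snapshot's LSN apply.
def restore_segment(snapshot, log_records, snapshot_lsn):
    state = dict(snapshot)                  # start from the segment snapshot
    for lsn, key, value in sorted(log_records):
        if lsn > snapshot_lsn:              # redo only what the snapshot missed
            state[key] = value
    return state

snap = {"a": 1}
logs = [(5, "a", 2), (9, "b", 7)]           # LSN 5 already in snapshot; LSN 9 is not
print(restore_segment(snap, logs, snapshot_lsn=5))  # → {'a': 1, 'b': 7}
```

Because every segment carries its own snapshot and log stream, the real system can run this step for all segments in parallel and asynchronously.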
33.
Instant Crash Recovery
Traditional Databases
Have to replay logs since the last
checkpoint
Typically 5 minutes between checkpoints
Single-threaded in MySQL; requires a
large number of disk accesses
Amazon Aurora
Underlying storage replays redo records
on demand as part of a disk read
Parallel, distributed, asynchronous
No replay for startup
Diagram: with checkpointed data and a redo log, a crash at T0 requires re-application of the SQL in the redo log since the last checkpoint; in Aurora, a crash at T0 results in redo logs being applied to each segment on demand, in parallel, asynchronously
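The "replay on demand as part of a disk read" idea can be sketched as follows: after a crash, nothing is replayed up front; a storage node applies a page's pending redo records the first time that page is read. An illustrative toy, not Aurora's real implementation:

```python
# Sketch: lazy redo application. Pending redo records for a page are applied
# during the first read of that page, so startup needs no replay phase at all.
class StorageNode:
    def __init__(self, pages, pending_redo):
        self.pages = pages                 # page_id -> current value
        self.pending = pending_redo        # page_id -> [values in LSN order]

    def read(self, page_id):
        for value in self.pending.pop(page_id, []):   # apply redo on demand
            self.pages[page_id] = value
        return self.pages[page_id]

node = StorageNode({"p1": "old"}, {"p1": ["mid", "new"]})
print(node.read("p1"))   # → new  (redo applied during the read itself)
print(node.pending)      # → {}   (nothing left to replay for this page)
```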
34.
Survivable Caches
We moved the buffer cache out of the
database process
Cache remains warm in the event of
database restart
Lets you resume fully loaded
operations much faster
Instant crash recovery + survivable
cache = quick and easy recovery from
DB failures
Diagram: the caching process is outside the DB process and remains warm across a database restart
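The survivable-cache idea above can be sketched by holding the buffer cache in an object that outlives the engine, so a "restarted" engine starts warm. Purely illustrative, with invented class names:

```python
# Sketch: the buffer cache lives outside the engine object, so a new ("restarted")
# engine attached to the same cache serves warm reads immediately.
class BufferCache:
    def __init__(self):
        self.pages = {}          # survives engine restarts

class Engine:
    def __init__(self, cache):
        self.cache = cache       # cache owned by a separate process in Aurora

    def read(self, page_id, storage):
        if page_id not in self.cache.pages:          # cold read hits storage
            self.cache.pages[page_id] = storage[page_id]
        return self.cache.pages[page_id]

storage = {"p1": "v"}
cache = BufferCache()
e1 = Engine(cache)
e1.read("p1", storage)           # warms the cache
e2 = Engine(cache)               # engine "restart": same cache reattached
print("p1" in e2.cache.pages)    # → True: the cache survived the restart
```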
35.
Faster Fail-Over
MYSQL: app running → DB failure → failure detection → DNS propagation → recovery → app running (15–20 sec)

AURORA WITH MARIADB DRIVER: app running → DB failure → failure detection → recovery → app running (3–20 sec)

The combination of survivable caches and instant crash recovery makes failover very fast.
36.
Simulate failures using SQL
To cause the failure of a component at the database node:
ALTER SYSTEM CRASH [{INSTANCE | DISPATCHER | NODE}]

To simulate the failure of disks:
ALTER SYSTEM SIMULATE percent_failure DISK failure_type IN [DISK index | NODE index] FOR INTERVAL interval

To simulate the failure of networking:
ALTER SYSTEM SIMULATE percent_failure NETWORK failure_type [TO {ALL | read_replica | availability_zone}] FOR INTERVAL interval
37.
Key Features
38.
Cross Region Read Replicas
• Features
• Additional 15 Read Replicas in the new Region
• Very low RPO & RTO
• Unencrypted clusters
• Use Cases
• Cross-Region disaster recovery
• Cross-Region migration
• Regional availability
39.
• Features
• Share with Specific Accounts
• Create Public Snapshots
• Manually Generated, Unencrypted Snapshots
• Use Cases
• Separation of Environments (dev, test, prod)
• Partnering (vendors, customers)
• Data Dissemination (research, public datasets)
Cross Account Snapshot Sharing
40.
Create a copy of a database without duplicate storage costs
• Creation of a clone is nearly instantaneous – we don’t copy data
• Data copy happens only on write – when original and cloned volume data differ
• Cost-effective – pay extra storage only for the specific pages that have been updated

Typical use cases:
• Clone a production DB to run tests
• Reorganize a database
• Save a point-in-time snapshot for analysis without impacting the production system

Diagram: a production database with chained clones serving production applications, dev/test applications, and benchmarks
Fast database cloning
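The copy-on-write behavior described above can be sketched in a few lines: a clone references its parent's pages and materializes its own copy only when a page is written. Illustrative only, with invented names:

```python
# Copy-on-write sketch of fast cloning: the clone stores no pages at creation
# time; a page is copied into the clone only when the clone writes to it.
class Volume:
    def __init__(self, pages=None, parent=None):
        self.pages = pages or {}     # only pages this volume owns
        self.parent = parent

    def read(self, page_id):
        if page_id in self.pages:
            return self.pages[page_id]
        return self.parent.read(page_id)     # shared, un-copied page

    def write(self, page_id, value):
        self.pages[page_id] = value          # copy-on-write: diverge here only

prod = Volume({"p1": "a", "p2": "b"})
clone = Volume(parent=prod)                  # near-instant: no data copied
print(clone.read("p1"), len(clone.pages))    # → a 0  (still fully shared)
clone.write("p2", "c")                       # only now is extra storage used
print(clone.read("p2"), prod.read("p2"))     # → c b  (parent unaffected)
```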
41.
Security and Monitoring
42.
Encryption at rest and transit
• Isolates your data within an Amazon VPC
• Encryption at rest using keys you create and manage with AWS KMS
• Data, automated backups, snapshots, and replicas in the same cluster are all automatically encrypted
• Seamless encryption and decryption, requiring no changes to your application
• Automatic encryption in transit
43.
Encryption at Rest
44.
Enhanced Monitoring
Amazon CloudWatch
metrics for RDS
CPU utilization
Storage
Memory
50+ system/OS metrics
1–60 second granularity
DB connections
Selects per second
Latency (read and write)
Cache hit ratio
Replica lag
CloudWatch alarms
Similar to on-premises custom
monitoring tools
45.
AWS Lambda Integration
• Amazon Aurora version 1.8 and later
• Integrate your Aurora DB with other AWS services, e.g. send an SNS notification on row insert into a specific table
• Built-in stored procedure mysql.lambda_async invokes a Lambda function asynchronously
• Associate an IAM role with the Aurora cluster
CALL mysql.lambda_async (
lambda_function_ARN,
lambda_function_input
)
46.
Example: Send Email from Aurora
import boto3

ses = boto3.client('ses')

def SES_send_email(event, context):
    return ses.send_email(
        Source=event['email_from'],
        Destination={'ToAddresses': [event['email_to']]},
        Message={
            'Subject': {'Data': event['email_subject']},
            'Body': {'Text': {'Data': event['email_body']}}
        }
    )

DROP PROCEDURE IF EXISTS SES_send_email;
DELIMITER ;;
CREATE PROCEDURE SES_send_email(
    IN email_from VARCHAR(255),
    IN email_to VARCHAR(255),
    IN subject VARCHAR(255),
    IN body TEXT) LANGUAGE SQL
BEGIN
    CALL mysql.lambda_async(
        'arn:aws:lambda:us-west-2:123456789012:function:SES_send_email',
        CONCAT('{"email_to" : "', email_to,
               '", "email_from" : "', email_from,
               '", "email_subject" : "', subject,
               '", "email_body" : "', body, '"}')
    );
END
;;
DELIMITER ;

mysql> call SES_send_email('example_to@amazon.com', 'example_from@amazon.com', 'Email subject', 'Email content');
47.
Backtrack quickly brings the database to a desired point in time.
No restore from backup. No copying of data. Not destructive – can backtrack many times.
Quickly recover from unintentional DML/DDL operations.
“Backtrack” provides near-instantaneous restores
Timeline: rewinding to T1 makes the changes between T1 and T2 invisible; writes then continue (T3, T4), and a later rewind to T3 makes the changes between T3 and T4 invisible instead – without destroying them.
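The non-destructive rewind described above can be sketched as a timestamped change log with a movable visibility horizon: a backtrack just moves the horizon, copying and destroying nothing, which is why it is near-instant and repeatable. A toy model with invented names, not the Aurora feature's implementation:

```python
# Toy model of Backtrack: changes are kept with timestamps; a rewind marks
# later changes invisible instead of deleting them, so it can be repeated.
class BacktrackableDB:
    def __init__(self):
        self.changes = []             # append-only (t, key, value) log
        self.horizon = float("inf")   # changes with t > horizon are invisible

    def write(self, t, key, value):
        self.changes.append((t, key, value))

    def backtrack(self, t):
        self.horizon = t              # near-instant: no data copied or dropped

    def snapshot(self):
        state = {}
        for t, key, value in self.changes:
            if t <= self.horizon:
                state[key] = value
        return state

db = BacktrackableDB()
db.write(1, "x", "a"); db.write(2, "x", "b")
db.backtrack(1)
print(db.snapshot())   # → {'x': 'a'}  (the t=2 change is invisible)
db.backtrack(2)
print(db.snapshot())   # → {'x': 'b'}  (non-destructive: we can move again)
```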
48.
49.
Aurora Serverless
On-demand, auto-scaling, serverless Aurora database
• Starts up on demand, shuts down when not in use
• Scales up/down automatically
• No application impact when scaling
• Pay per second, 1 minute minimum

Diagram: the application connects to a database endpoint served by a request router, which draws scalable DB capacity from a warm pool of instances over shared database storage
50.
Database end-point provisioning
When you provision a database, Aurora Serverless:
• Provisions VPC end-points for the application connectors
• Initializes request routers to accept connections
• Creates an Aurora storage volume
A database instance is only provisioned when the first request arrives

Diagram: the application in the customer VPC connects through VPC end-points and a network load balancer to the request routers and the Aurora storage volume
51.
Instance provisioning and scaling
• First request triggers instance provisioning. Usually 1-3 seconds
• Instance auto-scales up and down as workload changes. Usually 1-3 seconds
• Instances hibernate after a user-defined period of inactivity
• Scaling operations are transparent to the application – user sessions are not terminated
• Database storage is persisted until explicitly deleted by the user

Use cases include: infrequently used applications (e.g. a low-volume blog site); spiky workloads; test & development databases

Diagram: the request router moves the application from the current instance to a new instance drawn from the warm pool, over persistent database storage
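The provision-on-first-request and hibernate-on-inactivity behavior above can be sketched with a tiny state machine (illustrative names and timing units; not the Aurora Serverless implementation):

```python
# Sketch: a serverless endpoint provisions an instance from a warm pool on the
# first request and returns it to the pool after a period of inactivity.
class ServerlessEndpoint:
    def __init__(self, warm_pool, idle_limit=2):
        self.pool = warm_pool            # warm pool of ready instances
        self.idle_limit = idle_limit     # user-defined inactivity period
        self.instance = None
        self.idle = 0

    def handle(self, query):
        if self.instance is None:        # first request: provision on demand
            self.instance = self.pool.pop()
        self.idle = 0
        return (self.instance, query)

    def tick(self):                      # called once per idle interval
        self.idle += 1
        if self.instance is not None and self.idle >= self.idle_limit:
            self.pool.append(self.instance)   # hibernate; storage persists
            self.instance = None

ep = ServerlessEndpoint(["db-1"])
print(ep.handle("SELECT 1")[0])  # → db-1 (provisioned on the first request)
ep.tick(); ep.tick()             # inactivity period elapses
print(ep.instance)               # → None (hibernated back into the warm pool)
```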
52.
Aurora Multi-Master (In Preview)
Diagram: masters M1–M3 on shared storage, coordinated by a global resource manager exchanging locking-protocol messages

• Scale out write performance across multiple Availability Zones
• Allow applications to direct read/write workloads to multiple instances in a cluster
• Operate with higher availability
53.
Multi-region Multi-Master (Announced)
• Write accepted locally
• Optimistic concurrency control – no distributed lock manager, no chatty lock management protocol
• Conflicts handled hierarchically – at head nodes, at storage nodes, and at AZ- and Region-level arbitrators
• Near-linear performance scaling when there are no or low levels of conflicts

Diagram: head nodes in Region 1 and Region 2, each over a multi-AZ storage volume holding a local partition and a remote partition
54.
Partner categories: Business Intelligence, Data Integration, Query and Monitoring, SI and Consulting
Source: Amazon
“We ran our compatibility test suites against Amazon Aurora and everything
just worked." - Dan Jewett, Vice President of Product Management at Tableau
Well established ecosystem
55.
Migrating to Amazon Aurora
56.
Simplify migration from RDS MySQL
1. Establish baseline
   a. RDS MySQL to Aurora DB snapshot migration
   b. MySQL dump/import
2. Catch-up changes
   a. Binlog replication
   b. Tungsten replicator
Diagram: application users connect over the network to MySQL and Aurora during the migration
57.
Migration from EC2 & on-premises

Data migration service
• Logical data replication from on-premises or EC2
• Code & schema conversion across engines

S3 integration
• Load partial datasets directly from / to S3
• Ingest large database snapshots (>2TB)

Snowball integration
• Ingest huge database snapshots (>10TB)
• Send us your data in a suitcase!
58.
Move data to the same or different database engine
Keep your apps running during the migration
Start your first migration in 10 minutes or less
Replicate within, to, or from Amazon EC2 or RDS
AWS Database
Migration Service
59.
AWS Database Migration Service
Homogeneous DB Migrations
• e.g. MySQL to MySQL/Aurora
Heterogeneous DB Migrations
• e.g. Oracle to MySQL/Aurora
• AWS Schema Conversion Tool
Database Consolidation
60.
Features
Converting database schema
Converting data warehouse schema
Converting application SQL
Code browser that highlights places where
manual edits are required
Secure connections to your databases with SSL
The AWS Schema Conversion Tool helps automate many
database schema and code conversion tasks when
migrating from source to target database engines
AWS Schema Conversion Tool (AWS SCT)
61.
Database migration assessment
• Connect SCT to source and target databases
• Run assessment report
• Read executive summary
• Follow detailed instructions
62.
Diagram: application users at the customer premises connect over the Internet / VPN to AWS
• Start a replication instance
• Connect to source and target databases
• Select tables, schemas, or databases
• Let AWS Database Migration Service (AWS DMS) create tables, load data, and keep them in sync
• Switch applications over to the target at your convenience
How does it work?
Keep your apps running during the migration
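Conceptually, the steps above boil down to an initial full load of the existing rows followed by continuous application of captured changes, so source and target stay in sync while the app keeps running. A minimal sketch of that idea (illustrative only; not the DMS API):

```python
# Sketch of full load + change data capture (CDC), the pattern DMS applies:
# copy what exists, then replay the ordered change stream on the target.
def full_load(source):
    return dict(source)                       # initial copy of existing rows

def apply_changes(target, change_stream):
    for op, key, value in change_stream:      # ordered CDC feed from the source
        if op == "DELETE":
            target.pop(key, None)
        else:                                 # INSERT / UPDATE
            target[key] = value
    return target

source = {1: "alice", 2: "bob"}
target = full_load(source)                    # "create tables, load data"
changes = [("UPDATE", 2, "bobby"), ("INSERT", 3, "carol"), ("DELETE", 1, None)]
print(apply_changes(target, changes))         # → {2: 'bobby', 3: 'carol'}
```

Once the change stream has drained, the target matches the source and applications can be switched over at your convenience.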
63.
Quick Tour: AWS DMS & SCT
64.
Native migration options
If you’re not switching engines and can take downtime:
• SQL Server: .bak file import
• MySQL: read replicas
• Oracle: SQL Developer, Data Pump, Export/Import
• PostgreSQL: pg_dump
• SAP ASE: bcp
65.
When to use DMS and SCT?

Modernize your database tier
• Commercial to open-source
• Commercial to Amazon Aurora
Modernize your data warehouse
• Commercial to Amazon Redshift

Migrate
• Migrate business-critical applications
• Migrate from Classic to Amazon Virtual Private Cloud (Amazon VPC)
• Migrate data warehouses to Redshift
• Upgrade to a minor version

Replicate
• Consolidate shards into Aurora
• Create cross-Region Read Replicas
• Run your analytics in the cloud
• Keep your dev/test and production environments in sync
66.
AWS DMS & Snowball
67.
Features
Simple, fast, and secure data transfer
1/5 the cost of high-speed internet
Can transfer up to 90 PB of data
AWS Snowball is a petabyte-scale data transport solution
that uses secure appliances to transfer large amounts of
data into and out of the AWS cloud
AWS Snowball
68.
AWS Snowball
69.
Common use cases
• Migrate large databases (over 5TB)
• Migrate many databases at once
• Migrate over slow network
• Push vs. Pull
Using AWS DMS and Snowball together
70.
Migrating a DB using AWS Snowball

Diagram: a local replication agent (installed through AWS SCT) at the customer premises loads data onto an AWS Snowball appliance; the appliance is shipped to AWS and its contents land in Amazon S3, from which AWS DMS loads the target, with ongoing changes flowing over the Internet / VPN
72.
Why use DMS and SCT?
• Secure
• Cost Effective
• Remove Barriers to Entry
• Allow DB Freedom
• Keep a Leg in the Cloud
• Easy to Use, but Sophisticated…
• Near-Zero Downtime
73.
AWS Database Migration Service Adoption
74.
Database migration playbook
75.
SCT (schema) + DMS (data) + Migration Playbook (best practices) – the recipe for successful database migrations
76.
AWS DMS customers…
77.
This Is My Architecture: WonderLend Hubs
• Migrating an ISV solution from MS SQL Server to Amazon Aurora PostgreSQL using AWS DMS & SCT
• Scalability and high availability at affordable cost
• Schema: 500+ tables, 90+ stored procedures
https://aws.amazon.com/this-is-my-architecture/
https://youtu.be/K9N59jiMYvU
78.
Resources
• Amazon Aurora
• https://aws.amazon.com/rds/aurora/details/
• https://aws.amazon.com/rds/aurora/faqs/
• http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Aurora.html
• http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.BestPractices.html
• https://d0.awsstatic.com/product-marketing/Aurora/Aurora_Export_Import_Best_Practices_v1-3.pdf
• AWS DMS
• https://aws.amazon.com/documentation/dms/
79.
Thank you