Oracle Autonomous
Transaction Processing (ATP):
In Heavy Traffic, Why Drive Stick?
Jim Czuprynski
Consultant
@JimTheWhyGuy
My Credentials
• 40 years of database-centric IT experience
• Oracle DBA since 2001
• Oracle 9i, 10g, 11g, 12c OCP and ADWC
• Oracle ACE Director since 2014
• ODTUG Database Committee Lead
• Editor of ODTUG TechCeleration
• Oracle-centric blog (Generally, It Depends)
• Regular speaker at Oracle OpenWorld, COLLABORATE,
KSCOPE, and international and regional OUGs
E-mail me at jczuprynski@zerodefectcomputing.com
Follow me on Twitter (@JimTheWhyGuy)
Connect with me on LinkedIn (Jim Czuprynski)
Our Agenda
• Autonomous Transaction Processing (ATP)
• Creating, Controlling, and Monitoring an ATP Instance
• Loading Data Into ATP
• Monitoring ATP Performance in Multiple Dimensions
• Demo: How ATP Reacts to Overwhelming Workloads
• Conclusions and References
Moving to Autonomous DB: A Suggested Business Process Flow
• Assess: Is my application workload really ready to move to ATP?
• Plan: What migration strategy is most appropriate? How long an outage can my production application afford?
• Migrate: Transfer data using the chosen migration strategy, and keep it synchronized
• Monitor: Watch for any unexpected service outages, performance degradation, or user complaints
• Tweak: Should any application workloads shift to a different ATP instance service?
As an evolving Oracle Enterprise Data Architect, it’s crucial to recognize
and embrace the main thrust of Autonomous DB:
No More Knobs!
Autonomous Transaction Processing (ATP):
Getting Started
• Creating an ATP Instance
• Loading SOE Schema and Data Into ATP
ATP: Creating a New Instance (1)
Specify your cloud account …
1
… and get logged in
2
Access your Cloud Dashboard, then choose what kind of
instance to create
3
Build a new compartment for your ATP instance …
4
… and check out the other compartments available
5
ATP: Creating a New Instance (2)
Specify a compartment and
administrator credentials …
1
… and ATP instance creation begins!
2
ATP instance now shows up in chosen compartment …
3
… and your first ATP instance is now ready to access
4
ATP: Creating a New Instance (3)
Connect to the new instance
using the ADMIN account …
1
Here’s your first look at the ATP Service Console!
2
Request new credentials for access …
3
… supply a robust password …
4
… and save the
new credentials in
TNSNAMES home
5
Example: tightening the I/O and elapsed-time limits for the HIGH consumer group via the CS_RESOURCE_MANAGER package:

SQL> BEGIN
CS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE(
consumer_group => 'HIGH'
,io_megabytes_limit => 10
,elapsed_time_limit => 30
);
END;
/
PL/SQL procedure successfully completed.
Examples of Automatically Provided ATP Database Services
Service Name | Usage | Parallelism? | Resource Management Plan Shares | Concurrency | Usage Recommendations
PDBSOE_TPURGENT | OLTP | Manual | 12 | Unlimited | Highest-priority service aimed at time-critical OLTP operations
PDBSOE_TP | OLTP | 1 | 8 | Unlimited | Use for typical (non-time-critical) OLTP operations
PDBSOE_HIGH | Queries | CPU_COUNT | 4 | 3 queries | When the system is under resource pressure, these sessions get highest priority
PDBSOE_MEDIUM | Queries | 4 | 2 | 1.25 x CPU_COUNT queries | When the system is under resource pressure, these sessions receive medium priority
PDBSOE_LOW | Queries | 1 | 1 | 2 x CPU_COUNT queries | When the system is under resource pressure, these sessions receive lowest priority
See the detailed documentation for complete information on how these database services work.
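To see the service selection in action, a session can simply connect through the TNS alias for the desired service and then ask the database which service it landed on. A minimal SQL*Plus sketch (the SOE user, its password, and the pdbsoe_tp alias are illustrative placeholders):

```sql
-- Connect the (hypothetical) SOE schema owner through the PDBSOE_TP service,
-- i.e. the medium-priority OLTP service from the table above:
CONNECT soe/"S0E_Passw0rd#"@pdbsoe_tp

-- Confirm which service (and therefore which resource plan shares) this session uses:
SELECT SYS_CONTEXT('USERENV', 'SERVICE_NAME') AS service_name FROM dual;
```

Switching a workload between priorities is thus purely a connection-string decision; no resource plan needs to be edited by hand.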
ATP: Migrating and Loading Data
Am I Empowered To … | 18c | ATP
Load data with SQL*Loader or SQL Developer? | Yes | Yes, but source files should reside "nearby" on the network
Load data with Data Pump Import? | Yes | Yes, but the export dump set resides in object storage
Export data with Data Pump Export? | Yes | Yes, but the export dump set resides in object storage
Synchronize data with GoldenGate*? | Yes | Yes, within certain limits
*See this documentation for complete information on GoldenGate capabilities
for Autonomous Databases.
Using SwingBench to Build and Load the SOE Sample Schema
ATP: Monitoring Performance
in Multiple Dimensions
• Generating a Simple Sample Workload
• Monitoring Performance With the ATP Service Console
• Monitoring Performance with MonitorDB Utility
• Demonstration: Generating a “Nightmare” Workload
ATP: Monitoring Instance and Statement Performance
How is the ATP instance performing right now, and are there any evident “pushbacks” against a running workload?
1 Performance can also be viewed for a particular narrower time period
2
Viewing the performance of running as well as completed individual statements
3
Viewing an individual SQL statement’s performance …
4 … the statement’s execution plan …
5
… and how much parallelism is being consumed
6
ATP: Turning the “Big Red Dial”
Requesting CPU scale-up …
1 Scale-up in progress …
2 … and successful CPU scale-up completed
3
Workload Exhaustion Demonstration:
Five different workloads simultaneously executed against SOE schema
After scale-up, TPURGENT
performance improves …
… the number of executing
statements increases …
… and there’s a decrease
in queued statements
ATP:
Advantages,
Drawbacks,
and Limits
18c vs. ATP: Comparison of Features
Am I Empowered To … | 18c | ATP
Add my own schemas? | Yes | Yes
Connect applications directly via TNSNAMES? | Yes | Yes
Elastically upsize or downsize CPUs, memory, and storage? | Yes | Yes
Create my own CDBs and PDBs? | Yes | No
Clone a PDB to the same or another CDB? | Yes | No
Build my own tablespaces? | Yes | No
Modify memory pool sizes (e.g. SGA_SIZE)? | Yes | No
Modify security settings (e.g. keystores)? | Yes | No
Connect directly as SYS? | Yes | No
Build a PDB using RMAN backups? | Yes | No
Connect with Enterprise Manager Cloud Control for monitoring? | Via Proxy Agent | No
ATP: Loading Data Via DBMS_CLOUD.COPY_DATA (1)
Creating credentials for accessing file system:
1
SQL> BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'extb_tpcds'
,username => 'IOUGCloudTrial@ioug.org'
,password => '(;n<T1#-MpY>4u>_yilK'
);
END;
/
Creating the new table:
2
SQL> CREATE TABLE tpcds.customer_credit_ratings (
ccr_customer_number NUMBER(7)
,ccr_last_reported DATE
,ccr_credit_rating NUMBER(5)
,ccr_missed_payments NUMBER(3)
,ccr_credit_maximum NUMBER(7)
)
STORAGE (INITIAL 8M NEXT 4M)
PARTITION BY RANGE (ccr_last_reported)
INTERVAL(NUMTOYMINTERVAL(3, 'MONTH'))
(PARTITION ccr_oldest
VALUES LESS THAN (TO_DATE('1998-04-01', 'yyyy-mm-dd'))
);
Table created.
Loading data with DBMS_CLOUD.COPY_DATA:
3
SQL> BEGIN
DBMS_CLOUD.COPY_DATA(
table_name => 'CUSTOMER_CREDIT_RATINGS'
,credential_name => 'EXTB_TPCDS'
,file_uri_list =>
'https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1/iougcloudtrial/
ADWExternalTables/CreditScoring_Current.dat'
,schema_name => 'TPCDS'
,field_list => 'ccr_customer_number CHAR(08),ccr_last_reported CHAR(10)
,ccr_credit_rating CHAR(05),ccr_missed_payments CHAR(03)
,ccr_credit_maximum CHAR(07)'
,format => '{"delimiter" : "|" , "dateformat" : "YYYY-MM-DD"}');
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('ERROR:' || SQLCODE || ' ' || SQLERRM);
END;
/
PL/SQL procedure successfully completed.
ATP: Loading Data Via DBMS_CLOUD.COPY_DATA (2)
Monitoring a running load task …
1 … even when it fails to complete successfully!
2
Show status of running
load operations:
3
SET LINESIZE 132
SET PAGESIZE 20000
COL owner_name FORMAT A08 HEADING "Owner"
COL table_name FORMAT A24 HEADING "Table|Loaded"
COL type FORMAT A08 HEADING "Operation"
COL status FORMAT A10 HEADING "Status"
COL start_dtm FORMAT A19 HEADING "Started At"
COL update_dtm FORMAT A19 HEADING "Finished At"
COL logfile_table FORMAT A12 HEADING "LOGFILE|Table"
COL badfile_table FORMAT A12 HEADING "BADFILE|Table"
SELECT
owner_name
,table_name
,type
,status
,TO_CHAR(start_time,'YYYY-MM-DD HH24:MI:SS') start_dtm
,TO_CHAR(update_time,'YYYY-MM-DD HH24:MI:SS') update_dtm
,logfile_table
,badfile_table
FROM user_load_operations
WHERE type = 'COPY'
ORDER BY start_time DESC;
Table LOGFILE BADFILE
Owner Loaded Operatio Status Started At Finished At Table Table
-------- ------------------------ -------- ---------- ------------------- ------------------- ------------ ------------
TPCDS CUSTOMER_CREDIT_RATINGS COPY COMPLETED 2018-10-08 11:00:59 2018-10-08 11:03:12 COPY$38_LOG COPY$38_BAD
TPCDS CUSTOMER_CREDIT_RATINGS COPY FAILED 2018-10-08 10:51:09 2018-10-08 10:53:16 COPY$37_LOG COPY$37_BAD
TPCDS CUSTOMER_CREDIT_RATINGS COPY FAILED 2018-10-08 10:50:49 2018-10-08 10:50:49
TPCDS CUSTOMER_CREDIT_RATINGS COPY FAILED 2018-10-08 10:50:03 2018-10-08 10:50:03
TPCDS CUSTOMER_CREDIT_RATINGS COPY FAILED 2018-10-08 10:34:33 2018-10-08 10:35:56 COPY$34_LOG COPY$34_BAD
Show the
resulting
LOG File:
4
SQL> SELECT *
FROM copy$38_log;
LOG file opened at 10/08/18 16:03:01
Total Number of Files=1
Data File: https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1/iougcloudt
Log File: COPY$38_144722.log
LOG file opened at 10/08/18 16:03:01
Bad File: COPY$38_355882.bad
Field Definitions for table COPY$WQPDD1Q3X2892USR6RY7
Record format DELIMITED BY
Data in file has same endianness as the platform
Rows with all null fields are accepted
Fields in Data Source:
CCR_CUSTOMER_NUMBER CHAR (8)
Terminated by "|"
CCR_LAST_REPORTED CHAR (10)
Date datatype DATE, date mask YYYY-MM-DD
Terminated by "|"
CCR_CREDIT_RATING CHAR (5)
Terminated by "|"
CCR_MISSED_PAYMENTS CHAR (3)
Terminated by "|"
CCR_CREDIT_MAXIMUM CHAR (7)
Terminated by "|"
Date Cache Statistics for table COPY$WQPDD1Q3X2892USR6RY7
Date conversion cache disabled due to overflow (default size: 1000)
ATP: Migrating Data Via DataPump Export and Import
Export data from source database:
1
$> expdp vevo/vevo@pdbvevo parfile=ADW_VEVO.expdp
#####
# File: ADW_VEVO.expdp
# Purpose: DataPump Export parameter file for VEVO schema
# 1.) Exclude all:
# - Clusters
# - Database Links
# - Indexes and Index Types
# - Materialized Views, Logs, and Zone Maps
# 2.) For partitioned tables, unload all table data in a single operation (rather
# than unload each table partition as a separate operation) for faster loading
# 3.) Use 4 degrees of parallelism and write to multiple dump files
#####
DIRECTORY=DATA_PUMP_DIR
EXCLUDE=INDEX,CLUSTER,INDEXTYPE,MATERIALIZED_VIEW,MATERIALIZED_VIEW_LOG,MATERIALIZED_ZONEMAP,DB_LINK
DATA_OPTIONS=GROUP_PARTITION_TABLE_DATA
PARALLEL=4
SCHEMAS=vevo
DUMPFILE=export%u.dmp
Export: Release 18.0.0.0.0 - Production on Sat Sep 1 19:12:37 2018
Version 18.1.0.0.0
Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 18c EE Extreme Perf Release 18.0.0.0.0 - Production
Starting "VEVO"."SYS_EXPORT_SCHEMA_01": vevo/********@pdbvevo parfile=ADW_VEVO.expdp
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . .
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
. . exported "VEVO"."T_CANVASSING" 27.77 MB 539821 rows
. . exported "VEVO"."T_STAFF" 13.86 KB 26 rows
. . exported "VEVO"."T_CAMPAIGN_ORG" 8.031 KB 25 rows
. . exported "VEVO"."T_VOTING_RESULTS" 80.90 MB 1887761 rows
. . exported "VEVO"."T_VOTERS" 84.84 MB 180000 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set.
Master table "VEVO"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for VEVO.SYS_EXPORT_SCHEMA_01 is:
/u01/app/oracle/admin/ORCL/dpdump/6F4634904CBF29F3E0535AEA110A9CAE/export01.dmp
/u01/app/oracle/admin/ORCL/dpdump/6F4634904CBF29F3E0535AEA110A9CAE/export02.dmp
/u01/app/oracle/admin/ORCL/dpdump/6F4634904CBF29F3E0535AEA110A9CAE/export03.dmp
/u01/app/oracle/admin/ORCL/dpdump/6F4634904CBF29F3E0535AEA110A9CAE/export04.dmp
Job "VEVO"."SYS_EXPORT_SCHEMA_01" successfully completed at Sat Sep 1 19:13:17 2018 elapsed 0 00:00:39
Transfer export dump set to Object Container
2
Set up credentials for access:
3
SQL> CONNECT admin/"N0M0reKn0bs#"@tpcds_high
SQL> BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'DPI_VEVO'
,username => 'jczuprynski@zerodefectcomputing.com'
,password => 'N0M0reKn0bs#'
);
END;
/
SQL> ALTER DATABASE PROPERTY
SET default_credential = 'ADMIN.DPI_VEVO';
Add new schema into ADW Instance:
4
SQL> CONNECT admin/"N0M0reKn0bs#"@tpcds_high
CREATE USER vevo
IDENTIFIED BY N0M0reKn0bs#
TEMPORARY TABLESPACE temp
PROFILE DEFAULT;
GRANT RESOURCE TO vevo;
GRANT CREATE PROCEDURE TO vevo;
GRANT CREATE PUBLIC SYNONYM TO vevo;
GRANT CREATE SEQUENCE TO vevo;
GRANT CREATE SESSION TO vevo;
GRANT CREATE SYNONYM TO vevo;
GRANT CREATE TABLE TO vevo;
GRANT CREATE VIEW TO vevo;
GRANT DROP PUBLIC SYNONYM TO vevo;
GRANT EXECUTE ANY PROCEDURE TO vevo;
GRANT READ,WRITE ON DIRECTORY data_pump_dir TO vevo;
Import data into ADW instance:
5
$> ./impdp admin/IOUG1sAwesome@TPCDS_HIGH \
DIRECTORY=DATA_PUMP_DIR \
VERSION=18.0.0 \
REMAP_SCHEMA=vevo:vevo \
DUMPFILE=default_credential:https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1/iougcloudtrial/DP_VEVO/export%U.dmp \
PARALLEL=4 \
PARTITION_OPTIONS=MERGE \
TRANSFORM=SEGMENT_ATTRIBUTES:N \
TRANSFORM=DWCS_CVT_IOTS:Y \
TRANSFORM=CONSTRAINT_USE_DEFAULT_INDEX:Y \
EXCLUDE=index,cluster,indextype,materialized_view,materialized_view_log,materialized_zonemap,db_link
Import: Release 12.2.0.1.0 - Production on Sun Sep 2 21:51:32 2018
Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 18c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Master table "ADMIN"."SYS_IMPORT_FULL_02" successfully loaded/unloaded
Starting "ADMIN"."SYS_IMPORT_FULL_02": admin/********@TPCDS_HIGH DIRECTORY=DATA_PUMP_DIR VERSION=18.0.0
. . .
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "VEVO"."T_STAFF" 13.86 KB 26 rows
. . imported "VEVO"."T_CANVASSING" 27.77 MB 539821 rows
. . imported "VEVO"."T_CAMPAIGN_ORG" 8.031 KB 25 rows
. . imported "VEVO"."T_VOTING_RESULTS" 80.90 MB 1887761 rows
. . imported "VEVO"."T_VOTERS" 84.84 MB 180000 rows
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job "ADMIN"."SYS_IMPORT_FULL_02" successfully completed at Mon Sep 3 02:52:42 2018 elapsed 0 00:01:06
ATP: Advantages of "No More Knobs"
Remember, ATP (like ADW) is all about no more knobs … and that’s really advantageous!
• Service instance can be stopped and restarted as necessary
• Useful for conserving Cloud credits
• Easy to connect to
• Only a few entries in SQLNET.ORA file and TNSNAMES.ORA are required
• Regular RMAN backups taken automatically on regular nightly schedule
• No instance tuning required
• Memory pool sizes are already locked in
• Parallelism is automatically derived depending on number of OCPUs and service name selected for connection
• Only appropriate licensing options are included
• No worries about accidentally incurring potential additional licensing fees
• Direct-path loads are fully supported
• DataPump Export and Import provides for rapid provisioning from existing databases
• GoldenGate support has been added as well
ATP: Summary of Appropriate Use Cases
ATP is most appropriate for the following application workload requirements and environments:
• Mixed workloads, including OLTP and moderate reporting
• Exadata storage software caches most frequently used database blocks in flash memory on storage cells
• Up to 128 OCPUs and 128 TB of storage can be requested per ATP instance (subject to availability within instance’s region)
• Ideally, OLTP application workload(s) should already be well-tuned to avoid surprises
• Virtually no DBA resources required for database management
• No instance tuning is necessary
• Selection of appropriate database service for the workload is really the only choice required
• Parallelism derived from database service selected and number of OCPUs available
• Scale-up and scale-down requires just a single button push
• Database migration and transformation only limited by desired / appropriate transferal methods
• Fresh load: DBMS_CLOUD.COPY_DATA, SQL*Loader, or INSERT INTO … SELECT FROM an EXTERNAL Table
• Existing schema(s): DataPump Import
• Tight synchronization required: GoldenGate
• Extremely large data transfers possible via Oracle Cloud Infrastructure Data Transfer Appliance
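The "fresh load" path above can also go through an external table instead of DBMS_CLOUD.COPY_DATA. A minimal sketch, assuming a credential has already been created with DBMS_CLOUD.CREATE_CREDENTIAL (the table names, credential name, columns, and file URI here are hypothetical placeholders):

```sql
-- Map a pipe-delimited file in object storage as a read-only external table:
BEGIN
  DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
    table_name      => 'EXT_CREDIT_RATINGS'
    ,credential_name => 'OBJ_STORE_CRED'
    ,file_uri_list   => 'https://objectstorage.example.com/v1/mytenancy/mybucket/CreditScoring_Current.dat'
    ,column_list     => 'ccr_customer_number NUMBER(7), ccr_credit_rating NUMBER(5)'
    ,format          => '{"delimiter" : "|"}'
  );
END;
/

-- Then populate the permanent table with a direct-path insert:
INSERT /*+ APPEND */ INTO customer_credit_ratings (ccr_customer_number, ccr_credit_rating)
SELECT ccr_customer_number, ccr_credit_rating
FROM ext_credit_ratings;
COMMIT;
```

This keeps the external table around for repeat loads, whereas COPY_DATA creates and drops its staging objects per invocation.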
ATP:
Reference Material
• Feature Limitations
• Permitted SQL Commands
• Initialization Parameter Restrictions
• Valuable Blogs, Whitepapers, and References
ATP: Database Feature Limitations
Several database features normally available for an OCI-resident Oracle
database are restricted for ATP instances:
Object / Permission / Feature | Restrictions
Tablespaces | Cannot be added, removed, or modified
Parallelism | Enabled by default, based on the number of OCPUs and the database service chosen for the application's connection
Compression | HCC compression is not enabled by default, but a compression clause will be honored
Result Caching | Enabled by default for all statements; cannot be changed
Node File System and OS | No direct access permitted
Database Links to Other Databases | Prohibited to preserve security features
PL/SQL Calls Using DB Links | Likewise, prohibited
Parallel DML | Enabled by default, but can be disabled at the session level: ALTER SESSION DISABLE PARALLEL DML;
See Restrictions for Database Features for a complete list of these ATP limitations.
ATP: Permitted Changes to Initialization Parameters
Only the following database initialization parameters may be modified:
Initialization Parameters That Can Be Modified:
APPROX_FOR_AGGREGATION, APPROX_FOR_COUNT_DISTINCT, APPROX_FOR_PERCENTILE,
AWR_PDB_AUTOFLUSH_ENABLED, most NLS parameters,
OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES*, OPTIMIZER_IGNORE_HINTS,
OPTIMIZER_IGNORE_PARALLEL_HINTS, PLSCOPE_SETTINGS, PLSQL_CCFLAGS,
PLSQL_DEBUG, PLSQL_OPTIMIZE_LEVEL, PLSQL_WARNINGS, TIME_ZONE*
* Only via ALTER SESSION
See Restrictions to Database Initialization Parameters for more information on permissible changes.
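As a quick illustration of what is still allowed, a session can adjust a couple of the permitted parameters from the list above (the values chosen here are arbitrary examples, not recommendations):

```sql
-- Ask the optimizer to ignore embedded hints for this session only:
ALTER SESSION SET optimizer_ignore_hints = TRUE;

-- Enable all PL/SQL compiler warnings while developing stored code:
ALTER SESSION SET plsql_warnings = 'ENABLE:ALL';
```

Anything outside this list (memory pools, parallelism knobs, and so on) is managed by the service itself.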
ATP: Unavailable Options and Packs
The following database options and packs are not enabled for ATP instances:
Non-Usable Database Options and Packs:
Oracle Application Express, Oracle Spatial and Graph, Oracle Tuning Pack,
Oracle Data Masking and Subsetting Pack, Oracle Real Application Testing,
Oracle R capabilities of Oracle Advanced Analytics, Oracle Database Vault,
Oracle Industry Data Models, Oracle Text, Oracle Multimedia,
Oracle Database Lifecycle Management Pack,
Oracle Cloud Management Pack for Oracle Database,
Java in DB, Oracle OLAP, Oracle XML DB, Oracle Workspace Manager, Context
See Restrictions for Database Features for complete information on unusable database options and packs.
ATP: Unavailable SQL Commands
The following SQL commands cannot be executed against an ATP instance:
See Restrictions for SQL Commands for complete information on these unavailable SQL commands.
SQL Command | Reason for Unavailability
ADMINISTER KEY MANAGEMENT | PDB-level security tightly enforced
CREATE / ALTER / DROP TABLESPACE | Tablespaces are strictly controlled
ALTER PROFILE | Resource limits and security restraints tightly enforced
CREATE DATABASE LINK | Self-containment and security
CREATE INDEX [BITMAP] | BITMAP indexes are not permitted
Useful Resources and Documentation
• ATP Documentation:
https://docs.oracle.com/en/cloud/paas/atp-cloud/index.html
• Dominic Giles’s Blog on Setting Up SwingBench for ATP:
http://www.dominicgiles.com/blog/files/c84a63640d52961fc28f750570888cdc-169.html
• Oracle Autonomous and Secure Cloud Services Blog:
https://blogs.oracle.com/autonomous-and-secure-cloud-services
• Maria Colgan on What to Expect From ATP Cloud:
https://blogs.oracle.com/what-to-expect-from-oracle-autonomous-transaction-processing-cloud
vu2urc
 

Kürzlich hochgeladen (20)

From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
 
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdfUnderstanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Script
 
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
 
Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slide
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
 
Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024
 
AWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of TerraformAWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of Terraform
 
Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a Fresher
 
HTML Injection Attacks: Impact and Mitigation Strategies
HTML Injection Attacks: Impact and Mitigation StrategiesHTML Injection Attacks: Impact and Mitigation Strategies
HTML Injection Attacks: Impact and Mitigation Strategies
 
Real Time Object Detection Using Open CV
Real Time Object Detection Using Open CVReal Time Object Detection Using Open CV
Real Time Object Detection Using Open CV
 
2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdf
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt Robison
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organization
 
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, AdobeApidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
 

Autonomous Transaction Processing (ATP): In Heavy Traffic, Why Drive Stick?

  • 1. Oracle Autonomous Transaction Processing (ATP): In Heavy Traffic, Why Drive Stick? Jim Czuprynski Consultant @JimTheWhyGuy
  • 2. My Credentials • 40 years of database-centric IT experience • Oracle DBA since 2001 • Oracle 9i, 10g, 11g, 12c OCP and ADWC • Oracle ACE Director since 2014 • ODTUG Database Committee Lead • Editor of ODTUG TechCeleration • Oracle-centric blog (Generally, It Depends) • Regular speaker at Oracle OpenWorld, COLLABORATE, KSCOPE, and international and regional OUGs E-mail me at jczuprynski@zerodefectcomputing.com Follow me on Twitter (@JimTheWhyGuy) Connect with me on LinkedIn (Jim Czuprynski)
  • 3. Our Agenda •Autonomous Transaction Processing (ATP) •Creating, Controlling, and Monitoring an ATP Instance •Loading Data Into ATP •Monitoring ATP Performance in Multiple Dimensions •Demo: How ATP Reacts to Overwhelming Workloads •Conclusions and References
  • 4. Moving to Autonomous DB: A Suggested Business Process Flow Assess • Is my application workload really ready to move to ATP? Plan •What migration strategy is most appropriate? •How long of an outage can my production application afford? Migrate • Transfer data using chosen migration strategy, and keep it synchronized Monitor • Watch for any unexpected service outage / performance degradation / user complaints Tweak • Should any application workloads shift to a different ATP instance service? As an evolving Oracle Enterprise Data Architect, it’s crucial to recognize and embrace the main thrust of Autonomous DB: No More Knobs!
  • 5. Autonomous Transaction Processing (ATP): Getting Started • Creating an ATP Instance • Loading SOE Schema and Data Into ATP
  • 6. ATP: Creating a New Instance (1) Specify your cloud account … 1 … and get logged in 2 Access your Cloud Dashboard, then choose what kind of instance to create 3 Build a new compartment for your ATP instance … 4 … and check out the other compartments available 5
  • 7. ATP: Creating a New Instance (2) Specify a compartment and administrator credentials … 1 … and ATP instance creation begins! 2 ATP instance now shows up in chosen compartment … 3 … and your first ATP instance is now ready to access 4
  • 8. ATP: Creating a New Instance (3) Connect to the new instance using the ADMIN account … 1 Here’s your first look at the ATP Service Console! 2 Request new credentials for access … 3 … supply a robust password … 4 … and save the new credentials in TNSNAMES home 5
  • 9. SQL> BEGIN
         CS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE(
            consumer_group => 'HIGH'
           ,io_megabytes_limit => 10
           ,elapsed_time_limit => 30
         );
       END;
       /

PL/SQL procedure successfully completed.
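A limit set this way can later be lifted by calling the same procedure again; per the CS_RESOURCE_MANAGER documentation, passing NULL for a limit removes it. A minimal sketch, using the same HIGH consumer group as on the slide:

```sql
-- Remove the previously set I/O and elapsed-time limits for the HIGH
-- consumer group by passing NULL (documented reset behavior).
BEGIN
  CS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE(
     consumer_group     => 'HIGH'
    ,io_megabytes_limit => NULL
    ,elapsed_time_limit => NULL
  );
END;
/
```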
  • 10. Examples of Automatically Provided ATP Database Services
Service Name    | Usage   | Parallelism? | Resource Management Plan Shares | Concurrency              | Usage Recommendations
PDBSOE_TPURGENT | OLTP    | Manual       | 12                              | Unlimited                | Highest priority service aimed at time-critical OLTP operations
PDBSOE_TP       | OLTP    | 1            | 8                               | Unlimited                | Use for typical (non-time-critical) OLTP operations
PDBSOE_HIGH     | Queries | CPU_COUNT    | 4                               | 3 queries                | When the system is under resource pressure, these sessions will get highest priority
PDBSOE_MEDIUM   | Queries | 4            | 2                               | 1.25 x CPU_COUNT queries | When the system is under resource pressure, these sessions receive medium priority
PDBSOE_LOW      | Queries | 1            | 1                               | 2 x CPU_COUNT queries    | When the system is under resource pressure, these sessions receive lowest priority
See the detailed documentation for complete information on how these database services work.
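In practice, choosing among these services is just a matter of which TNS alias the application connects through; a quick sketch (the SOE user name and password below are placeholders):

```sql
-- The service name suffix is the only workload "knob" the developer turns.
-- User name and password here are illustrative placeholders.
SQL> CONNECT soe/"S0mePassw0rd#"@pdbsoe_tpurgent   -- time-critical OLTP
SQL> CONNECT soe/"S0mePassw0rd#"@pdbsoe_tp         -- typical OLTP
SQL> CONNECT soe/"S0mePassw0rd#"@pdbsoe_low        -- lowest-priority reporting
```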
  • 11. ATP: Migrating and Loading Data
Am I Empowered To …                         | 18c | ATP
Load data with SQL*Loader or SQL Developer? | Yes | Yes, but source files should reside “nearby” by network
Load data with Data Pump Import?            | Yes | Yes, but export dump set resides in object storage
Export data with Data Pump Export?          | Yes | Yes, but export dump set resides in object storage
Synchronize data with GoldenGate*?          | Yes | Yes, within certain limits
*See this documentation for complete information on GoldenGate capabilities for Autonomous Databases.
  • 12. Using SwingBench to Build and Load the SOE Sample Schema
  • 13. ATP: Monitoring Performance in Multiple Dimensions • Generating a Simple Sample Workload • Monitoring Performance With the ATP Service Console • Monitoring Performance with MonitorDB Utility • Demonstration: Generating a “Nightmare” Workload
  • 14. ATP: Monitoring Instance and Statement Performance How is the ATP instance performing right now, and are there any evident “pushbacks” against a running workload? 1 Performance can also be viewed for a particular narrower time period 2 Viewing the performance of running as well as completed individual statements 3 Viewing an individual SQL statement’s performance … 4 … the statement’s execution plan … 5 … and how much parallelism is being consumed 6
  • 15. ATP: Turning the “Big Red Dial” Requesting CPU scale-up … 1 Scale-up in progress … 2 … and successful CPU scale-up completed 3 Workload Exhaustion Demonstration: Five different workloads simultaneously executed against SOE schema After scale-up, TPURGENT performance improves … … the number of executing statements increases … … and there’s a decrease in queued statements
  • 17. 18c vs. ATP: Comparison of Features
Am I Empowered To …                                           | 18c             | ATP
Add my own schemas?                                           | Yes             | Yes
Connect applications directly via TNSNAMES?                   | Yes             | Yes
Elastically upsize or downsize CPUs, memory, and storage?     | Yes             | Yes
Create my own CDBs and PDBs?                                  | Yes             | No
Clone a PDB to the same or another CDB?                       | Yes             | No
Build my own tablespaces?                                     | Yes             | No
Modify memory pool sizes (e.g. SGA_TARGET)?                   | Yes             | No
Modify security settings (e.g. keystores)?                    | Yes             | No
Connect directly as SYS?                                      | Yes             | No
Build a PDB using RMAN backups?                               | Yes             | No
Connect with Enterprise Manager Cloud Control for monitoring? | Via Proxy Agent | No
  • 18. ATP: Loading Data Via DBMS_CLOUD.COPY_DATA (1)
Creating credentials for accessing file system: 1
SQL> BEGIN
       DBMS_CLOUD.CREATE_CREDENTIAL(
          credential_name => 'extb_tpcds'
         ,username => 'IOUGCloudTrial@ioug.org'
         ,password => '(;n<T1#-MpY>4u>_yilK'
       );
     END;
     /
Creating the new table: 2
SQL> CREATE TABLE tpcds.customer_credit_ratings (
        ccr_customer_number NUMBER(7)
       ,ccr_last_reported   DATE
       ,ccr_credit_rating   NUMBER(5)
       ,ccr_missed_payments NUMBER(3)
       ,ccr_credit_maximum  NUMBER(7)
     )
     STORAGE (INITIAL 8M NEXT 4M)
     PARTITION BY RANGE (ccr_last_reported)
     INTERVAL(NUMTOYMINTERVAL(3, 'MONTH'))
     (PARTITION ccr_oldest VALUES LESS THAN (TO_DATE('1998-04-01', 'yyyy-mm-dd')));
Table created.
Loading data with DBMS_CLOUD.COPY_DATA: 3
SQL> BEGIN
       DBMS_CLOUD.COPY_DATA(
          table_name => 'CUSTOMER_CREDIT_RATINGS'
         ,credential_name => 'EXTB_TPCDS'
         ,file_uri_list => 'https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1/iougcloudtrial/ADWExternalTables/CreditScoring_Current.dat'
         ,schema_name => 'TPCDS'
         ,field_list => 'ccr_customer_number CHAR(08),ccr_last_reported CHAR(10)
                        ,ccr_credit_rating CHAR(05),ccr_missed_payments CHAR(03)
                        ,ccr_credit_maximum CHAR(07)'
         ,format => '{"delimiter" : "|" , "dateformat" : "YYYY-MM-DD"}');
     EXCEPTION
       WHEN OTHERS THEN
         DBMS_OUTPUT.PUT_LINE('ERROR:' || SQLCODE || ' ' || SQLERRM);
     END;
     /
PL/SQL procedure successfully completed.
  • 19. ATP: Loading Data Via DBMS_CLOUD.COPY_DATA (2)
Monitoring a running load task … 1 … even when it fails to complete successfully! 2
Show status of running load operations: 3
SET LINESIZE 132
SET PAGESIZE 20000
COL owner_name FORMAT A08 HEADING "Owner"
COL table_name FORMAT A24 HEADING "Table|Loaded"
COL type FORMAT A08 HEADING "Operation"
COL status FORMAT A10 HEADING "Status"
COL start_dtm FORMAT A19 HEADING "Started At"
COL update_dtm FORMAT A19 HEADING "Finished At"
COL logfile_table FORMAT A12 HEADING "LOGFILE|Table"
COL badfile_table FORMAT A12 HEADING "BADFILE|Table"
SELECT owner_name
      ,table_name
      ,type
      ,status
      ,TO_CHAR(start_time,'YYYY-MM-DD HH24:MI:SS') start_dtm
      ,TO_CHAR(update_time,'YYYY-MM-DD HH24:MI:SS') update_dtm
      ,logfile_table
      ,badfile_table
  FROM user_load_operations
 WHERE type = 'COPY'
 ORDER BY start_time DESC;

         Table                                                                                      LOGFILE      BADFILE
Owner    Loaded                   Operation Status     Started At          Finished At         Table        Table
-------- ------------------------ --------- ---------- ------------------- ------------------- ------------ ------------
TPCDS    CUSTOMER_CREDIT_RATINGS  COPY      COMPLETED  2018-10-08 11:00:59 2018-10-08 11:03:12 COPY$38_LOG  COPY$38_BAD
TPCDS    CUSTOMER_CREDIT_RATINGS  COPY      FAILED     2018-10-08 10:51:09 2018-10-08 10:53:16 COPY$37_LOG  COPY$37_BAD
TPCDS    CUSTOMER_CREDIT_RATINGS  COPY      FAILED     2018-10-08 10:50:49 2018-10-08 10:50:49
TPCDS    CUSTOMER_CREDIT_RATINGS  COPY      FAILED     2018-10-08 10:50:03 2018-10-08 10:50:03
TPCDS    CUSTOMER_CREDIT_RATINGS  COPY      FAILED     2018-10-08 10:34:33 2018-10-08 10:35:56 COPY$34_LOG  COPY$34_BAD

Show the resulting LOG file: 4
SQL> SELECT * FROM copy$38_log;
LOG file opened at 10/08/18 16:03:01
Total Number of Files=1
Data File: https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1/iougcloudt
Log File: COPY$38_144722.log
LOG file opened at 10/08/18 16:03:01
Bad File: COPY$38_355882.bad
Field Definitions for table COPY$WQPDD1Q3X2892USR6RY7
  Record format DELIMITED BY
  Data in file has same endianness as the platform
  Rows with all null fields are accepted
Fields in Data Source:
  CCR_CUSTOMER_NUMBER CHAR (8)   Terminated by "|"
  CCR_LAST_REPORTED   CHAR (10)  Date datatype DATE, date mask YYYY-MM-DD  Terminated by "|"
  CCR_CREDIT_RATING   CHAR (5)   Terminated by "|"
  CCR_MISSED_PAYMENTS CHAR (3)   Terminated by "|"
  CCR_CREDIT_MAXIMUM  CHAR (7)   Terminated by "|"
Date Cache Statistics for table COPY$WQPDD1Q3X2892USR6RY7
  Date conversion cache disabled due to overflow (default size: 1000)
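Because the BADFILE contents are also exposed as a table, the rejected rows behind a FAILED attempt can be examined the same way; a small sketch (the COPY$37_BAD name comes from the status listing above):

```sql
-- Inspect records rejected by the failed 10:51:09 load attempt
SQL> SELECT * FROM copy$37_bad;
```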
  • 20. ATP: Migrating Data Via Data Pump Export and Import
Export data from source database: 1
$> expdp vevo/vevo@pdbvevo parfile=ADW_VEVO.expdp

#####
# File:    ADW_VEVO.expdp
# Purpose: Data Pump Export parameter file for VEVO schema
# 1.) Exclude all:
#     - Clusters
#     - Database Links
#     - Indexes and Index Types
#     - Materialized Views, Logs, and Zone Maps
# 2.) For partitioned tables, unload all table data in a single operation (rather
#     than unload each table partition as a separate operation) for faster loading
# 3.) Use 4 degrees of parallelism and write to multiple dump files
#####
DIRECTORY=DATA_PUMP_DIR
EXCLUDE=INDEX, CLUSTER, INDEXTYPE, MATERIALIZED_VIEW, MATERIALIZED_VIEW_LOG, MATERIALIZED_ZONEMAP, DB_LINK
DATA_OPTIONS=GROUP_PARTITION_TABLE_DATA
PARALLEL=4
SCHEMAS=vevo
DUMPFILE=export%u.dmp

Export: Release 18.0.0.0.0 - Production on Sat Sep 1 19:12:37 2018
Version 18.1.0.0.0
Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 18c EE Extreme Perf Release 18.0.0.0.0 - Production
Starting "VEVO"."SYS_EXPORT_SCHEMA_01": vevo/********@pdbvevo parfile=ADW_VEVO.expdp
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . .
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
. . exported "VEVO"."T_CANVASSING"      27.77 MB  539821 rows
. . exported "VEVO"."T_STAFF"           13.86 KB      26 rows
. . exported "VEVO"."T_CAMPAIGN_ORG"    8.031 KB      25 rows
. . exported "VEVO"."T_VOTING_RESULTS"  80.90 MB 1887761 rows
. . exported "VEVO"."T_VOTERS"          84.84 MB  180000 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set.
Master table "VEVO"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for VEVO.SYS_EXPORT_SCHEMA_01 is:
/u01/app/oracle/admin/ORCL/dpdump/6F4634904CBF29F3E0535AEA110A9CAE/export01.dmp
/u01/app/oracle/admin/ORCL/dpdump/6F4634904CBF29F3E0535AEA110A9CAE/export02.dmp
/u01/app/oracle/admin/ORCL/dpdump/6F4634904CBF29F3E0535AEA110A9CAE/export03.dmp
/u01/app/oracle/admin/ORCL/dpdump/6F4634904CBF29F3E0535AEA110A9CAE/export04.dmp
Job "VEVO"."SYS_EXPORT_SCHEMA_01" successfully completed at Sat Sep 1 19:13:17 2018 elapsed 0 00:00:39

Transfer export dump set to Object Container 2

Set up credentials for access: 3
SQL> CONNECT admin/"N0M0reKn0bs#"@tpcds_high
SQL> BEGIN
       DBMS_CLOUD.CREATE_CREDENTIAL(
          credential_name => 'DPI_VEVO'
         ,username => 'jczuprynski@zerodefectcomputing.com'
         ,password => 'N0M0reKn0bs#'
       );
     END;
     /
SQL> ALTER DATABASE PROPERTY SET default_credential = 'ADMIN.DPI_VEVO';

Add new schema into ATP instance: 4
SQL> CONNECT admin/"N0M0reKn0bs#"@tpcds_high
CREATE USER vevo IDENTIFIED BY "N0M0reKn0bs#"
  TEMPORARY TABLESPACE temp
  PROFILE DEFAULT;
GRANT RESOURCE TO vevo;
GRANT CREATE PROCEDURE TO vevo;
GRANT CREATE PUBLIC SYNONYM TO vevo;
GRANT CREATE SEQUENCE TO vevo;
GRANT CREATE SESSION TO vevo;
GRANT CREATE SYNONYM TO vevo;
GRANT CREATE TABLE TO vevo;
GRANT CREATE VIEW TO vevo;
GRANT DROP PUBLIC SYNONYM TO vevo;
GRANT EXECUTE ANY PROCEDURE TO vevo;
GRANT READ,WRITE ON DIRECTORY data_pump_dir TO vevo;

Import data into ATP instance: 5
$> ./impdp admin/IOUG1sAwesome@TPCDS_HIGH DIRECTORY=DATA_PUMP_DIR VERSION=18.0.0 REMAP_SCHEMA=vevo:vevo \
   DUMPFILE=default_credential:https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1/iougcloudtrial/DP_VEVO/export%U.dmp \
   PARALLEL=4 PARTITION_OPTIONS=MERGE TRANSFORM=SEGMENT_ATTRIBUTES:N TRANSFORM=DWCS_CVT_IOTS:Y \
   TRANSFORM=CONSTRAINT_USE_DEFAULT_INDEX:Y \
   EXCLUDE=index,cluster,indextype,materialized_view,materialized_view_log,materialized_zonemap,db_link

Import: Release 12.2.0.1.0 - Production on Sun Sep 2 21:51:32 2018
Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 18c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Master table "ADMIN"."SYS_IMPORT_FULL_02" successfully loaded/unloaded
Starting "ADMIN"."SYS_IMPORT_FULL_02": admin/********@TPCDS_HIGH DIRECTORY=DATA_PUMP_DIR VERSION=18.0.0 . . .
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "VEVO"."T_STAFF"           13.86 KB      26 rows
. . imported "VEVO"."T_CANVASSING"      27.77 MB  539821 rows
. . imported "VEVO"."T_CAMPAIGN_ORG"    8.031 KB      25 rows
. . imported "VEVO"."T_VOTING_RESULTS"  80.90 MB 1887761 rows
. . imported "VEVO"."T_VOTERS"          84.84 MB  180000 rows
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job "ADMIN"."SYS_IMPORT_FULL_02" successfully completed at Mon Sep 3 02:52:42 2018 elapsed 0 00:01:06
  • 21. ATP: Advantages of “No More Knobs”
Remember, ATP (like ADW) is all about no more knobs … and that’s really advantageous!
• Service instance can be stopped and restarted as necessary (useful for conserving Cloud credits)
• Easy to connect to: only a few entries in the SQLNET.ORA and TNSNAMES.ORA files are required
• RMAN backups taken automatically on a regular nightly schedule
• No instance tuning required: memory pool sizes are already locked in, and parallelism is automatically derived from the number of OCPUs and the service name selected for connection
• Only appropriate licensing options are included, so there are no worries about accidentally incurring additional licensing fees
• Direct-path loads are fully supported
• Data Pump Export and Import provides for rapid provisioning from existing databases
• GoldenGate support has been added as well
  • 22. ATP: Summary of Appropriate Use Cases
ATP is most appropriate for the following application workload requirements and environments:
• Mixed workloads, including OLTP and moderate reporting
  • Exadata storage software caches the most frequently used database blocks in flash memory on storage cells
  • Up to 128 OCPUs and 128 TB of storage can be requested per ATP instance (subject to availability within the instance’s region)
  • Ideally, OLTP application workload(s) should already be well-tuned to avoid surprises
• Virtually no DBA resources required for database management
  • No instance tuning is necessary; selecting the appropriate database service for the workload is really the only choice required
  • Parallelism is derived from the database service selected and the number of OCPUs available
  • Scale-up and scale-down requires just a single button push
• Database migration and transformation limited only by the desired / appropriate transfer methods
  • Fresh load: DBMS_CLOUD.COPY_DATA, SQL*Loader, or INSERT INTO … SELECT FROM an EXTERNAL table
  • Existing schema(s): Data Pump Import
  • Tight synchronization required: GoldenGate
  • Extremely large data transfers possible via Oracle Cloud Infrastructure Data Transfer Appliance
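The "INSERT INTO … SELECT FROM an EXTERNAL table" path above can be sketched with DBMS_CLOUD.CREATE_EXTERNAL_TABLE; the table names, credential, object-storage URI, and column list below are illustrative placeholders, not taken from the deck:

```sql
-- Declare an external table over a pipe-delimited file in object storage,
-- then load it with a direct-path insert. All identifiers are illustrative.
BEGIN
  DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
     table_name      => 'EXT_CREDIT_RATINGS'
    ,credential_name => 'EXTB_TPCDS'
    ,file_uri_list   => 'https://objectstorage.<region>.oraclecloud.com/<path>/CreditScoring.dat'
    ,column_list     => 'ccr_customer_number NUMBER(7), ccr_credit_rating NUMBER(5)'
    ,format          => '{"delimiter" : "|"}'
  );
END;
/

INSERT /*+ APPEND */ INTO tpcds.customer_credit_ratings
       (ccr_customer_number, ccr_credit_rating)
SELECT ccr_customer_number, ccr_credit_rating
  FROM ext_credit_ratings;
COMMIT;
```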
  • 23. ATP: Reference Material • Feature Limitations • Permitted SQL Commands • Initialization Parameter Restrictions • Valuable Blogs, Whitepapers, and References
  • 24. ATP: Database Feature Limitations
Several database features normally available for an OCI-resident Oracle database are restricted for ATP instances:
Object / Permission / Feature     | Restrictions
Tablespaces                       | Cannot be added, removed, or modified
Parallelism                       | Enabled by default and based on number of OCPUs and chosen database service for application to connect
Compression                       | HCC compression is not enabled by default, but a compression clause will be honored
Result Caching                    | Enabled by default for all statements; cannot be changed
Node File System and OS           | No direct access permitted
Database Links to Other Databases | Prohibited to preserve security features
PL/SQL Calls Using DB Links       | Likewise, prohibited
Parallel DML                      | Enabled by default, but can be disabled at session level: ALTER SESSION DISABLE PARALLEL DML;
See Restrictions for Database Features for a complete list of these ATP limitations.
  • 25. ATP: Permitted Changes to Initialization Parameters
Only the following database initialization parameters may be modified:
APPROX_FOR_AGGREGATION       OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES*
APPROX_FOR_COUNT_DISTINCT    OPTIMIZER_IGNORE_HINTS
APPROX_FOR_PERCENTILE        OPTIMIZER_IGNORE_PARALLEL_HINTS
AWR_PDB_AUTOFLUSH_ENABLED    PLSCOPE_SETTINGS
PLSQL_DEBUG                  PLSQL_WARNINGS
PLSQL_OPTIMIZE_LEVEL         PLSQL_CCFLAGS
Most NLS parameters          TIME_ZONE*
* Only via ALTER SESSION
See Restrictions to Database Initialization Parameters for more information on permissible changes.
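As a quick sketch of what "permitted" looks like in practice (parameter values here are illustrative):

```sql
-- Permitted: both parameters appear in the whitelist above
ALTER SESSION SET optimizer_ignore_hints = TRUE;
ALTER SESSION SET time_zone = 'America/Chicago';  -- TIME_ZONE: ALTER SESSION only

-- Not permitted: instance memory sizing is locked down, so an attempt to
-- change a restricted parameter, e.g.
--   ALTER SYSTEM SET sga_target = 8G;
-- fails with an error on an ATP instance.
```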
  • 26. ATP: Unavailable Options and Packs
The following database options and packs are not enabled for ATP instances:
• Oracle Application Express
• Oracle Spatial and Graph
• Oracle Tuning Pack
• Oracle Data Masking and Subsetting Pack
• Oracle Real Application Testing
• Oracle R capabilities of Oracle Advanced Analytics
• Oracle Database Vault
• Oracle Industry Data Models
• Oracle Text
• Oracle Database Lifecycle Management Pack
• Oracle Multimedia
• Oracle Cloud Management Pack for Oracle Database
• Java in DB
• Oracle OLAP
• Oracle XML DB
• Oracle Workspace Manager
• Context
See Restrictions for Database Features for complete information on unusable database options and packs.
  • 27. ATP: Unavailable SQL Commands
The following SQL commands cannot be executed against an ATP instance:
SQL Command                      | Reason for Unavailability
ADMINISTER KEY MANAGEMENT        | PDB-level security tightly enforced
CREATE / ALTER / DROP TABLESPACE | Tablespaces are strictly controlled
ALTER PROFILE                    | Resource limits and security restraints tightly enforced
CREATE DATABASE LINK             | Self-containment and security
CREATE INDEX [BITMAP]            | BITMAP indexes are not permitted
See Restrictions for SQL Commands for complete information on these unavailable SQL commands.
  • 28. Useful Resources and Documentation • ATP Documentation: https://docs.oracle.com/en/cloud/paas/atp-cloud/index.html • Dominic Giles’s Blog on Setting Up SwingBench for ATP: http://www.dominicgiles.com/blog/files/c84a63640d52961fc28f750570888cdc-169.html • Oracle Autonomous and Secure Cloud Services Blog: https://blogs.oracle.com/autonomous-and-secure-cloud-services • Maria Colgan on What to Expect From ATP Cloud: https://blogs.oracle.com/what-to-expect-from-oracle-autonomous-transaction-processing https://blogs.oracle.com/what-to-expect-from-oracle-autonomous-transaction-processing-cloud

Editor's Notes

  1. The basic characteristics of these consumer groups are:
TPURGENT: The highest priority application connection service for time-critical transaction processing operations. This connection service supports manual parallelism.
TP: This is the typical application connection service for transaction processing operations. This connection service does not run with parallelism.
HIGH: Sessions connected to the HIGH database service get the highest priority when the system is under resource pressure. Queries run serially unless you specify a parallel degree through a session parameter, using a statement hint, or by specifying a parallel degree on the underlying tables.
MEDIUM: Sessions connected to the MEDIUM database service get medium priority when the system is under resource pressure. Queries run serially unless you specify a parallel degree through a session parameter, using a statement hint, or by specifying a parallel degree on the underlying tables. Using the MEDIUM service, the degree of parallelism is limited to four (4).
LOW: Sessions connected to the LOW database service get the lowest priority when the system is under resource pressure. Queries run serially unless you specify a parallel degree through a session parameter, using a statement hint, or by specifying a parallel degree on the underlying tables.
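The statement-hint route mentioned above can be sketched as follows; the SOE.ORDERS table and the degree of 4 are illustrative:

```sql
-- Connected via the HIGH service, request a parallel degree of 4
-- for just this one query via a statement-level hint.
SELECT /*+ PARALLEL(4) */ COUNT(*)
  FROM soe.orders;
```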