Autonomous Transaction Processing (ATP) – the second in the family of Oracle's Autonomous Databases – offers Oracle DBAs a force multiplier for their OLTP application workloads. However, it's important to understand both the benefits and limitations of ATP before migrating any workloads to that environment. I'll offer a quick but deep dive into how best to take advantage of ATP – including how to load data quickly into the underlying database – and some ideas on how ATP will impact the role of the Oracle DBA in the immediate future. (Hint: Think automatic transmission instead of stick-shift.)
2. My Credentials
• 40 years of database-centric IT experience
• Oracle DBA since 2001
• Oracle 9i, 10g, 11g, 12c OCP and ADWC
• Oracle ACE Director since 2014
• ODTUG Database Committee Lead
• Editor of ODTUG TechCeleration
• Oracle-centric blog (Generally, It Depends)
• Regular speaker at Oracle OpenWorld, COLLABORATE,
KSCOPE, and international and regional OUGs
E-mail me at jczuprynski@zerodefectcomputing.com
Follow me on Twitter (@JimTheWhyGuy)
Connect with me on LinkedIn (Jim Czuprynski)
3. Our Agenda
• Autonomous Transaction Processing (ATP)
• Creating, Controlling, and Monitoring an ATP Instance
• Loading Data Into ATP
• Monitoring ATP Performance in Multiple Dimensions
• Demo: How ATP Reacts to Overwhelming Workloads
• Conclusions and References
4. Moving to Autonomous DB: A Suggested Business Process Flow
• Assess: Is my application workload really ready to move to ATP?
• Plan: What migration strategy is most appropriate? How long an outage can my production application afford?
• Migrate: Transfer data using the chosen migration strategy, and keep it synchronized.
• Monitor: Watch for any unexpected service outages, performance degradation, or user complaints.
• Tweak: Should any application workloads shift to a different ATP instance service?
As an evolving Oracle Enterprise Data Architect, it's crucial to recognize and embrace the main thrust of Autonomous DB:
No More Knobs!
6. ATP: Creating a New Instance (1)
1. Specify your cloud account …
2. … and get logged in.
3. Access your Cloud Dashboard, then choose what kind of instance to create.
4. Build a new compartment for your ATP instance …
5. … and check out the other compartments available.
7. ATP: Creating a New Instance (2)
1. Specify a compartment and administrator credentials …
2. … and ATP instance creation begins!
3. The ATP instance now shows up in the chosen compartment …
4. … and your first ATP instance is now ready to access.
8. ATP: Creating a New Instance (3)
1. Connect to the new instance using the ADMIN account …
2. Here's your first look at the ATP Service Console!
3. Request new credentials for access …
4. … supply a robust password …
5. … and save the new credentials in your TNSNAMES home.
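Connecting a client then needs only the downloaded credentials (wallet) zip. A minimal SQLNET.ORA sketch, assuming the wallet was unzipped to /home/oracle/wallet (that directory path is an assumption; the matching TNSNAMES.ORA entries arrive pre-built inside the wallet zip):

# SQLNET.ORA -- point the client at the unzipped wallet files
WALLET_LOCATION =
  (SOURCE = (METHOD = file)
    (METHOD_DATA = (DIRECTORY = "/home/oracle/wallet")))
SSL_SERVER_DN_MATCH = yes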
10. Examples of Automatically Provided ATP Database Services
Service Name    | Usage   | Parallelism? | Resource Mgmt Plan Shares | Concurrency              | Usage Recommendations
PDBSOE_TPURGENT | OLTP    | Manual       | 12                        | Unlimited                | Highest-priority service aimed at time-critical OLTP operations
PDBSOE_TP       | OLTP    | 1            | 8                         | Unlimited                | Use for typical (non-time-critical) OLTP operations
PDBSOE_HIGH     | Queries | CPU_COUNT    | 4                         | 3 queries                | When the system is under resource pressure, these sessions get highest priority
PDBSOE_MEDIUM   | Queries | 4            | 2                         | 1.25 x CPU_COUNT queries | When the system is under resource pressure, these sessions receive medium priority
PDBSOE_LOW      | Queries | 1            | 1                         | 2 x CPU_COUNT queries    | When the system is under resource pressure, these sessions receive lowest priority
See the detailed documentation for complete information on how these database services work.
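Because the service chosen at connect time determines priority and parallelism, it's worth confirming which service a session actually landed on. A minimal sketch (the PDBSOE_TP alias follows this deck's naming; substitute your own):

SQL> CONNECT soe/"<password>"@pdbsoe_tp
SQL> SELECT SYS_CONTEXT('USERENV','SERVICE_NAME') AS service_name
       FROM dual;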
11. ATP: Migrating and Loading Data
Am I Empowered To …                         | 18c | ATP
Load data with SQL*Loader or SQL Developer? | Yes | Yes, but source files should reside "nearby" on the network
Load data with Data Pump Import?            | Yes | Yes, but the export dump set resides in object storage
Export data with Data Pump Export?          | Yes | Yes, but the export dump set resides in object storage
Synchronize data with GoldenGate*?          | Yes | Yes, within certain limits
*See this documentation for complete information on GoldenGate capabilities for Autonomous Databases.
13. ATP: Monitoring Performance in Multiple Dimensions
• Generating a Simple Sample Workload
• Monitoring Performance With the ATP Service Console
• Monitoring Performance With the MonitorDB Utility
• Demonstration: Generating a "Nightmare" Workload
14. ATP: Monitoring Instance and Statement Performance
1. How is the ATP instance performing right now, and are there any evident "pushbacks" against a running workload?
2. Performance can also be viewed for a particular, narrower time period.
3. View the performance of running as well as completed individual statements.
4. View an individual SQL statement's performance …
5. … the statement's execution plan …
6. … and how much parallelism is being consumed.
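The Service Console is the primary monitoring view, but per-service activity can also be sampled from inside the database. A minimal sketch against V$SERVICEMETRIC (assuming the PDBSOE% service names from slide 10; the column list is illustrative):

SQL> SELECT service_name
           ,ROUND(cpupercall,2)    AS cpu_per_call
           ,ROUND(dbtimepercall,2) AS db_time_per_call
           ,ROUND(callspersec,2)   AS calls_per_sec
       FROM v$servicemetric
      WHERE service_name LIKE 'PDBSOE%'
      ORDER BY service_name;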
15. ATP: Turning the "Big Red Dial"
1. Requesting CPU scale-up …
2. Scale-up in progress …
3. … and successful CPU scale-up completed.
Workload Exhaustion Demonstration: five different workloads simultaneously executed against the SOE schema.
After scale-up, TPURGENT performance improves … the number of executing statements increases … and there's a decrease in queued statements.
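Since ATP derives parallelism and concurrency limits from the OCPU count, it's worth confirming the new value once a scale-up completes. A minimal sketch:

SQL> SELECT name, value
       FROM v$parameter
      WHERE name = 'cpu_count';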
17. 18c vs. ATP: Comparison of Features
Am I Empowered To …                                           | 18c             | ATP
Add my own schemas?                                           | Yes             | Yes
Connect applications directly via TNSNAMES?                   | Yes             | Yes
Elastically upsize or downsize CPUs, memory, and storage?     | Yes             | Yes
Create my own CDBs and PDBs?                                  | Yes             | No
Clone a PDB to the same or another CDB?                       | Yes             | No
Build my own tablespaces?                                     | Yes             | No
Modify memory pool sizes (e.g. SGA_SIZE)?                     | Yes             | No
Modify security settings (e.g. keystores)?                    | Yes             | No
Connect directly as SYS?                                      | Yes             | No
Build a PDB using RMAN backups?                               | Yes             | No
Connect with Enterprise Manager Cloud Control for monitoring? | Via Proxy Agent | No
18. ATP: Loading Data Via DBMS_CLOUD.COPY_DATA (1)
1. Creating credentials for accessing object storage:
SQL> BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'extb_tpcds'
,username => 'IOUGCloudTrial@ioug.org'
,password => '(;n<T1#-MpY>4u>_yilK'
);
END;
/
2. Creating the new table:
SQL> CREATE TABLE tpcds.customer_credit_ratings (
ccr_customer_number NUMBER(7)
,ccr_last_reported DATE
,ccr_credit_rating NUMBER(5)
,ccr_missed_payments NUMBER(3)
,ccr_credit_maximum NUMBER(7)
)
STORAGE (INITIAL 8M NEXT 4M)
PARTITION BY RANGE (ccr_last_reported)
INTERVAL(NUMTOYMINTERVAL(3, 'MONTH'))
(PARTITION ccr_oldest
VALUES LESS THAN (TO_DATE('1998-04-01', 'yyyy-mm-dd'))
);
Table created.
3. Loading data with DBMS_CLOUD.COPY_DATA:
SQL> BEGIN
DBMS_CLOUD.COPY_DATA(
table_name => 'CUSTOMER_CREDIT_RATINGS'
,credential_name => 'EXTB_TPCDS'
,file_uri_list =>
'https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1/iougcloudtrial/ADWExternalTables/CreditScoring_Current.dat'
,schema_name => 'TPCDS'
,field_list => 'ccr_customer_number CHAR(08),ccr_last_reported CHAR(10)
,ccr_credit_rating CHAR(05),ccr_missed_payments CHAR(03)
,ccr_credit_maximum CHAR(07)'
,format => '{"delimiter" : "|" , "dateformat" : "YYYY-MM-DD"}');
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('ERROR:' || SQLCODE || ' ' || SQLERRM);
END;
/
PL/SQL procedure successfully completed.
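A quick sanity check after the load completes (minimal sketch; the expected row count depends on the source file):

SQL> SELECT COUNT(*) AS rows_loaded
       FROM tpcds.customer_credit_ratings;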
19. ATP: Loading Data Via DBMS_CLOUD.COPY_DATA (2)
1. Monitoring a running load task …
2. … even when it fails to complete successfully!
3. Show the status of load operations:
SET LINESIZE 132
SET PAGESIZE 20000
COL owner_name FORMAT A08 HEADING "Owner"
COL table_name FORMAT A24 HEADING "Table|Loaded"
COL type FORMAT A08 HEADING "Operation"
COL status FORMAT A10 HEADING "Status"
COL start_dtm FORMAT A19 HEADING "Started At"
COL update_dtm FORMAT A19 HEADING "Finished At"
COL logfile_table FORMAT A12 HEADING "LOGFILE|Table"
COL badfile_table FORMAT A12 HEADING "BADFILE|Table"
SELECT
owner_name
,table_name
,type
,status
,TO_CHAR(start_time,'YYYY-MM-DD HH24:MI:SS') start_dtm
,TO_CHAR(update_time,'YYYY-MM-DD HH24:MI:SS') update_dtm
,logfile_table
,badfile_table
FROM user_load_operations
WHERE type = 'COPY'
ORDER BY start_time DESC;
Table LOGFILE BADFILE
Owner Loaded Operatio Status Started At Finished At Table Table
-------- ------------------------ -------- ---------- ------------------- ------------------- ------------ ------------
TPCDS CUSTOMER_CREDIT_RATINGS COPY COMPLETED 2018-10-08 11:00:59 2018-10-08 11:03:12 COPY$38_LOG COPY$38_BAD
TPCDS CUSTOMER_CREDIT_RATINGS COPY FAILED 2018-10-08 10:51:09 2018-10-08 10:53:16 COPY$37_LOG COPY$37_BAD
TPCDS CUSTOMER_CREDIT_RATINGS COPY FAILED 2018-10-08 10:50:49 2018-10-08 10:50:49
TPCDS CUSTOMER_CREDIT_RATINGS COPY FAILED 2018-10-08 10:50:03 2018-10-08 10:50:03
TPCDS CUSTOMER_CREDIT_RATINGS COPY FAILED 2018-10-08 10:34:33 2018-10-08 10:35:56 COPY$34_LOG COPY$34_BAD
4. Show the resulting LOG file:
SQL> SELECT *
FROM copy$38_log;
LOG file opened at 10/08/18 16:03:01
Total Number of Files=1
Data File: https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1/iougcloudt
Log File: COPY$38_144722.log
LOG file opened at 10/08/18 16:03:01
Bad File: COPY$38_355882.bad
Field Definitions for table COPY$WQPDD1Q3X2892USR6RY7
Record format DELIMITED BY
Data in file has same endianness as the platform
Rows with all null fields are accepted
Fields in Data Source:
CCR_CUSTOMER_NUMBER CHAR (8)
Terminated by "|"
CCR_LAST_REPORTED CHAR (10)
Date datatype DATE, date mask YYYY-MM-DD
Terminated by "|"
CCR_CREDIT_RATING CHAR (5)
Terminated by "|"
CCR_MISSED_PAYMENTS CHAR (3)
Terminated by "|"
CCR_CREDIT_MAXIMUM CHAR (7)
Terminated by "|"
Date Cache Statistics for table COPY$WQPDD1Q3X2892USR6RY7
Date conversion cache disabled due to overflow (default size: 1000)
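Each COPY_DATA attempt leaves behind its COPY$n_LOG and COPY$n_BAD tables. Once any failures have been diagnosed, that bookkeeping can be cleared; a minimal sketch using DBMS_CLOUD.DELETE_ALL_OPERATIONS:

SQL> BEGIN
       DBMS_CLOUD.DELETE_ALL_OPERATIONS(type => 'COPY'); -- clear the COPY load history and its log/bad tables
     END;
     /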
20. ATP: Migrating Data Via Data Pump Export and Import
1. Export data from the source database:
$> expdp vevo/vevo@pdbvevo parfile=ADW_VEVO.expdp
#####
# File: ADW_VEVO.expdp
# Purpose: DataPump Export parameter file for VEVO schema
# 1.) Exclude all:
# - Clusters
# - Database Links
# - Indexes and Index Types
# - Materialized Views, Logs, and Zone Maps
# 2.) For partitioned tables, unload all table data in a single operation (rather
# than unload each table partition as a separate operation) for faster loading
# 3.) Use 4 degrees of parallelism and write to multiple dump files
#####
DIRECTORY=DATA_PUMP_DIR
EXCLUDE=INDEX, CLUSTER, INDEXTYPE, MATERIALIZED_VIEW, MATERIALIZED_VIEW_LOG, MATERIALIZED_ZONEMAP, DB_LINK
DATA_OPTIONS=GROUP_PARTITION_TABLE_DATA
PARALLEL=4
SCHEMAS=vevo
DUMPFILE=export%u.dmp
Export: Release 18.0.0.0.0 - Production on Sat Sep 1 19:12:37 2018
Version 18.1.0.0.0
Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 18c EE Extreme Perf Release 18.0.0.0.0 - Production
Starting "VEVO"."SYS_EXPORT_SCHEMA_01": vevo/********@pdbvevo parfile=ADW_VEVO.expdp
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . .
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
. . exported "VEVO"."T_CANVASSING" 27.77 MB 539821 rows
. . exported "VEVO"."T_STAFF" 13.86 KB 26 rows
. . exported "VEVO"."T_CAMPAIGN_ORG" 8.031 KB 25 rows
. . exported "VEVO"."T_VOTING_RESULTS" 80.90 MB 1887761 rows
. . exported "VEVO"."T_VOTERS" 84.84 MB 180000 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set.
Master table "VEVO"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for VEVO.SYS_EXPORT_SCHEMA_01 is:
/u01/app/oracle/admin/ORCL/dpdump/6F4634904CBF29F3E0535AEA110A9CAE/export01.dmp
/u01/app/oracle/admin/ORCL/dpdump/6F4634904CBF29F3E0535AEA110A9CAE/export02.dmp
/u01/app/oracle/admin/ORCL/dpdump/6F4634904CBF29F3E0535AEA110A9CAE/export03.dmp
/u01/app/oracle/admin/ORCL/dpdump/6F4634904CBF29F3E0535AEA110A9CAE/export04.dmp
Job "VEVO"."SYS_EXPORT_SCHEMA_01" successfully completed at Sat Sep 1 19:13:17 2018 elapsed 0 00:00:39
2. Transfer the export dump set to an object storage container.
3. Set up credentials for access:
SQL> CONNECT admin/"N0M0reKn0bs#"@tpcds_high
SQL> BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'DPI_VEVO'
,username => 'jczuprynski@zerodefectcomputing.com'
,password => 'N0M0reKn0bs#'
);
END;
/
SQL> ALTER DATABASE PROPERTY
SET default_credential = 'ADMIN.DPI_VEVO';
4. Add the new schema into the ATP instance:
SQL> CONNECT admin/"N0M0reKn0bs#"@tpcds_high
CREATE USER vevo
IDENTIFIED BY N0M0reKn0bs#
TEMPORARY TABLESPACE temp
PROFILE DEFAULT;
GRANT RESOURCE TO vevo;
GRANT CREATE PROCEDURE TO vevo;
GRANT CREATE PUBLIC SYNONYM TO vevo;
GRANT CREATE SEQUENCE TO vevo;
GRANT CREATE SESSION TO vevo;
GRANT CREATE SYNONYM TO vevo;
GRANT CREATE TABLE TO vevo;
GRANT CREATE VIEW TO vevo;
GRANT DROP PUBLIC SYNONYM TO vevo;
GRANT EXECUTE ANY PROCEDURE TO vevo;
GRANT READ,WRITE ON DIRECTORY data_pump_dir TO vevo;
5. Import data into the ATP instance:
$> ./impdp admin/IOUG1sAwesome@TPCDS_HIGH
DIRECTORY=DATA_PUMP_DIR
VERSION=18.0.0
REMAP_SCHEMA=vevo:vevo
DUMPFILE=default_credential:https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1/iougcloudtrial/DP_VEVO/export%U.dmp
PARALLEL=4
PARTITION_OPTIONS=MERGE
TRANSFORM=SEGMENT_ATTRIBUTES:N
TRANSFORM=DWCS_CVT_IOTS:Y
TRANSFORM=CONSTRAINT_USE_DEFAULT_INDEX:Y
EXCLUDE=index,cluster,indextype,materialized_view,materialized_view_log
,materialized_zonemap,db_link
Import: Release 12.2.0.1.0 - Production on Sun Sep 2 21:51:32 2018
Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 18c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Master table "ADMIN"."SYS_IMPORT_FULL_02" successfully loaded/unloaded
Starting "ADMIN"."SYS_IMPORT_FULL_02": admin/********@TPCDS_HIGH DIRECTORY=DATA_PUMP_DIR VERSION=18.0.0
. . .
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "VEVO"."T_STAFF" 13.86 KB 26 rows
. . imported "VEVO"."T_CANVASSING" 27.77 MB 539821 rows
. . imported "VEVO"."T_CAMPAIGN_ORG" 8.031 KB 25 rows
. . imported "VEVO"."T_VOTING_RESULTS" 80.90 MB 1887761 rows
. . imported "VEVO"."T_VOTERS" 84.84 MB 180000 rows
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job "ADMIN"."SYS_IMPORT_FULL_02" successfully completed at Mon Sep 3 02:52:42 2018 elapsed 0 00:01:06
21. ATP: Advantages of “No More Knobs“
Remember, ATP (like ADW) is all about no more knobs … and that’s really advantageous!
• Service instance can be stopped and restarted as necessary
• Useful for conserving Cloud credits
• Easy to connect to
• Only a few entries in the SQLNET.ORA and TNSNAMES.ORA files are required (see the SQLNET.ORA sketch under slide 8)
• RMAN backups are taken automatically on a regular nightly schedule
• No instance tuning required
• Memory pool sizes are already locked in
• Parallelism is automatically derived depending on number of OCPUs and service name selected for connection
• Only appropriate licensing options are included
• No worries about accidentally incurring additional licensing fees
• Direct-path loads are fully supported
• DataPump Export and Import provides for rapid provisioning from existing databases
• GoldenGate support has been added as well
22. ATP: Summary of Appropriate Use Cases
ATP is most appropriate for the following application workload requirements and environments:
• Mixed workloads, including OLTP and moderate reporting
• Exadata storage software caches most frequently used database blocks in flash memory on storage cells
• Up to 128 OCPUs and 128 TB of storage can be requested per ATP instance (subject to availability within instance’s region)
• Ideally, OLTP application workload(s) should already be well-tuned to avoid surprises
• Virtually no DBA resources required for database management
• No instance tuning is necessary
• Selection of appropriate database service for the workload is really the only choice required
• Parallelism derived from database service selected and number of OCPUs available
• Scale-up and scale-down requires just a single button push
• Database migration and transformation are limited only by the desired / appropriate transfer methods
• Fresh load: DBMS_CLOUD.COPY_DATA, SQL*Loader, or INSERT INTO … SELECT FROM an external table (see the sketch after this list)
• Existing schema(s): DataPump Import
• Tight synchronization required: GoldenGate
• Extremely large data transfers possible via Oracle Cloud Infrastructure Data Transfer Appliance
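For the external-table route named in the fresh-load bullet above, a minimal sketch using DBMS_CLOUD.CREATE_EXTERNAL_TABLE. It reuses the EXTB_TPCDS credential and file URI from slide 18; the staging table name CCR_EXT and the trimmed column list are illustrative assumptions:

SQL> BEGIN
       DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
         table_name      => 'CCR_EXT' -- illustrative staging table name
        ,credential_name => 'EXTB_TPCDS'
        ,file_uri_list   => 'https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1/iougcloudtrial/ADWExternalTables/CreditScoring_Current.dat'
        ,column_list     => 'ccr_customer_number NUMBER(7), ccr_last_reported DATE'
        ,field_list      => 'ccr_customer_number CHAR(08), ccr_last_reported CHAR(10)'
        ,format          => '{"delimiter" : "|" , "dateformat" : "YYYY-MM-DD"}'
       );
     END;
     /
SQL> INSERT /*+ APPEND */ INTO tpcds.customer_credit_ratings (ccr_customer_number, ccr_last_reported)
       SELECT ccr_customer_number, ccr_last_reported
         FROM ccr_ext;
SQL> COMMIT;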
24. ATP: Database Feature Limitations
Several database features normally available for an OCI-resident Oracle database are restricted for ATP instances:
Object / Permission / Feature     | Restrictions
Tablespaces                       | Cannot be added, removed, or modified
Parallelism                       | Enabled by default; based on the number of OCPUs and the database service chosen for the application's connection
Compression                       | HCC compression is not enabled by default, but a compression clause will be honored
Result Caching                    | Enabled by default for all statements; cannot be changed
Node File System and OS           | No direct access permitted
Database Links to Other Databases | Prohibited, to preserve security features
PL/SQL Calls Using DB Links       | Likewise prohibited
Parallel DML                      | Enabled by default, but can be disabled at the session level: ALTER SESSION DISABLE PARALLEL DML;
See Restrictions for Database Features for a complete list of these ATP limitations.
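Parallel DML is the one restriction in this list that can be toggled per session; a minimal sketch:

SQL> ALTER SESSION DISABLE PARALLEL DML;  -- run subsequent DML serially
     -- … DML that must run serially …
SQL> ALTER SESSION ENABLE PARALLEL DML;   -- restore the ATP default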
25. ATP: Permitted Changes to Initialization Parameters
Only the following database initialization parameters may be modified:
• APPROX_FOR_AGGREGATION
• APPROX_FOR_COUNT_DISTINCT
• APPROX_FOR_PERCENTILE
• AWR_PDB_AUTOFLUSH_ENABLED
• OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES*
• OPTIMIZER_IGNORE_HINTS
• OPTIMIZER_IGNORE_PARALLEL_HINTS
• PLSCOPE_SETTINGS
• PLSQL_CCFLAGS
• PLSQL_DEBUG
• PLSQL_OPTIMIZE_LEVEL
• PLSQL_WARNINGS
• Most NLS parameters
• TIME_ZONE*
* Only via ALTER SESSION
See Restrictions to Database Initialization Parameters for more information on permissible changes.
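A minimal sketch of a few permitted session-level changes (the values shown are illustrative):

SQL> ALTER SESSION SET optimizer_ignore_hints = FALSE;            -- honor embedded SQL hints
SQL> ALTER SESSION SET plsql_warnings = 'ENABLE:ALL';             -- surface all PL/SQL compiler warnings
SQL> ALTER SESSION SET nls_date_format = 'YYYY-MM-DD HH24:MI:SS'; -- one of the modifiable NLS parameters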
26. ATP: Unavailable Options and Packs
The following database options and packs are not enabled for ATP instances:
• Oracle Application Express
• Oracle Tuning Pack
• Oracle Real Application Testing
• Oracle Database Vault
• Oracle Data Masking and Subsetting Pack
• Oracle Database Lifecycle Management Pack
• Oracle Cloud Management Pack for Oracle Database
• Oracle OLAP
• Oracle Workspace Manager
• Oracle Spatial and Graph
• Oracle R capabilities of Oracle Advanced Analytics
• Oracle Industry Data Models
• Oracle Text
• Oracle Multimedia
• Java in DB
• Oracle XML DB
• Context
See Restrictions for Database Features for complete information on unusable database options and packs.
27. ATP: Unavailable SQL Commands
The following SQL commands cannot be executed against an ATP instance:
SQL Command                      | Reason for Unavailability
ADMINISTER KEY MANAGEMENT        | PDB-level security is tightly enforced
CREATE / ALTER / DROP TABLESPACE | Tablespaces are strictly controlled
ALTER PROFILE                    | Resource limits and security restraints are tightly enforced
CREATE DATABASE LINK             | Self-containment and security
CREATE INDEX [BITMAP]            | BITMAP indexes are not permitted
See Restrictions for SQL Commands for complete information on these unavailable SQL commands.
28. Useful Resources and Documentation
• ATP Documentation:
https://docs.oracle.com/en/cloud/paas/atp-cloud/index.html
• Dominic Giles’s Blog on Setting Up SwingBench for ATP:
http://www.dominicgiles.com/blog/files/c84a63640d52961fc28f750570888cdc-169.html
• Oracle Autonomous and Secure Cloud Services Blog:
https://blogs.oracle.com/autonomous-and-secure-cloud-services
• Maria Colgan on What to Expect From ATP Cloud:
https://blogs.oracle.com/what-to-expect-from-oracle-autonomous-transaction-processing-cloud
Editor's Notes
The basic characteristics of these consumer groups are:
TPURGENT
The highest-priority application connection service for time-critical transaction processing operations.
This connection service supports manual parallelism.
TP
This is the typical application connection service for transaction processing operations.
This connection service does not run with parallelism.
HIGH
Sessions connected to the HIGH database service get the highest priority when the system is under resource pressure.
Queries run serially unless you specify a parallel degree through a session parameter, a statement hint, or a parallel degree on the underlying tables.
MEDIUM
Sessions connected to the MEDIUM database service get medium priority when the system is under resource pressure.
Queries run serially unless you specify a parallel degree through a session parameter, a statement hint, or a parallel degree on the underlying tables. With the MEDIUM service, the degree of parallelism is limited to four (4).
LOW
Sessions connected to the LOW database service get the lowest priority when the system is under resource pressure.
Queries run serially unless you specify a parallel degree through a session parameter, a statement hint, or a parallel degree on the underlying tables.
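Where the notes above mention specifying a parallel degree through a statement hint, a minimal sketch (reusing a table from this deck's examples):

SQL> SELECT /*+ PARALLEL(4) */ COUNT(*)
       FROM tpcds.customer_credit_ratings;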