12th November 2021
4.00 PM AEDT
Top 20 FAQs on the Autonomous Database
Sandesh Rao
Database/Cloud Day
Event
Partners
1: How to get started with the
Autonomous Database Free Tier
Always Free services enable developers and students to learn, build, and get hands-on
experience with Oracle Cloud for an unlimited time
Anyone can try for an unlimited time the full functionality of:
• Oracle Autonomous Database
• Oracle Cloud Infrastructure including:
• Compute VMs
• Block and Object Storage
• Load Balancer
Free tier
Free tier – Tech spec
2 Autonomous Databases (Autonomous Data Warehouse or Autonomous Transaction
Processing), each with 1 OCPU and 20 GB storage
2 Compute VMs, each with 1/8 OCPU and 1 GB memory
2 Block Volumes, 100 GB total, with up to 5 free backups
10 GB Object Storage, 10 GB Archive Storage, and 50,000/month API requests
1 Load Balancer, 10 Mbps bandwidth
10 TB/month Outbound Data Transfer
500 million ingestion Datapoints and 1 billion Datapoints for the Monitoring Service
1 million Notification deliveries per month and 1,000 emails per month
STEP 1: Creating OML Users
• Go back to the Cloud Console and open the Instances screen. Find your database, click the action menu, and select Service Console.
Creating OML Users
Go to the Administration tab and click Manage Oracle ML Users to go to the OML user management page, where you can manage OML users.
Creating OML Users
Once you have successfully signed in to OML, the application home page will be displayed.
Overview of OML Home Page
The grey menu bar at the top of the screen provides links to the main OML application menus (left
corner) and to workspace/project and user maintenance on the right-hand side.
Exploring the OML Home Page
Disaster recovery terminology
Peer Databases: Two or more databases that are linked and replicated; they consist of a
Primary database and Standby (copies of the primary) databases
Primary or Source Database: The main database that is actively being used and updated
Standby Database: A replica of the primary database, which constantly and
passively refreshes (i.e., replicates) data from the primary
Primary Region: The region in which a user first provisions a primary database and enables
cross-region Autonomous Data Guard
Remote Region: The user-selected region in which the standby is provisioned when enabling cross-region
Autonomous Data Guard.
Paired Regions: Two regions that are paired together to support X-ADG, such that a primary database may be provisioned in one of
the regions and its remote standby may be provisioned in the other region.
Recovery Point Objective (RPO): An organization's tolerance for data loss, after which business operations start to get severely
impacted
Recovery Time Objective (RTO): An organization's tolerance for the unavailability (or downtime) of a service, after which business
operations start to get severely impacted
If a regional failure occurs and your primary database is brought down, you may trigger a "Failover" to the remote standby database.
Failover across regions to the remote standby
• A failover is a role change: switching control from the primary database to the
standby database
• After a failover, a new remote standby for your new primary is automatically
provisioned when the primary region becomes available again
• During a failover, the system automatically recovers as much data as possible,
minimizing any potential data loss; there may still be a few seconds or minutes of
data loss
• You would usually perform a failover in a true disaster scenario, accepting
the few minutes of potential data loss to get your database back
online as soon as possible
Once your remote standby is provisioned, you will see a "Switchover" option on your database's console.
Switchover testing across regions with the remote standby
• A switchover from the remote standby database, performed while both your primary
and standby are healthy, is a role change: the primary database and the
remote standby database swap roles
• It may take several minutes, depending on the number of changes in the
primary database
• A switchover guarantees no data loss
• You would usually perform a switchover to test your applications or
mid-tiers against this role-change behaviour
Export Data As JSON To Object Storage
ADB now has a procedure to export a query as JSON directly to Object
Storage bucket.
The query can be an advanced query that includes joins or subqueries.
Specify the format parameter with the compression option to compress the
output files.
Use DBMS_CLOUD.DELETE_OBJECT to delete the exported files.
OVERVIEW
BEGIN
  DBMS_CLOUD.EXPORT_DATA(
    credential_name => 'DEF_CRED_NAME',
    file_uri_list   => 'bucketname/filename',
    query           => 'SELECT * FROM DEPT',
    format          => JSON_OBJECT('type' value 'json'));
END;
/
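As noted above, exported files can later be removed with DBMS_CLOUD.DELETE_OBJECT. A minimal sketch, where the object URI is a placeholder for the full Object Storage URL of an exported file (not a value from this deck):

```sql
BEGIN
  DBMS_CLOUD.DELETE_OBJECT(
    credential_name => 'DEF_CRED_NAME',
    object_uri      => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/dept_export.json');
END;
/
```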
HOW IT WORKS
Per-database with Instance Wallet selected:
• All existing database-specific instance wallets are voided.
• After rotation, you need to download a new wallet to connect to the database.
• NOTE: Regional wallets containing all database certificate keys continue to work.
Regional level with Regional Wallet selected:
• Both regional and database-specific instance wallets are voided.
• After rotation, you need to download new regional or instance wallets to connect to any
database in the region.
• All user sessions are terminated for databases whose wallet is rotated.
• User session termination begins after wallet rotation completes; however, this process
does not happen immediately.
New Option To Rotate Wallets For ADB
Most people do not like to configure wallets
• ADB uses mTLS to establish the client-server
connection
• Both the client and the server hold a special secret
key, which is exchanged and validated
• Going forward, one can connect to ADB using TLS
instead of mTLS
• To keep this secure, enabling TLS on an ADB instance
with a public endpoint requires an Access Control
List (ACL) to be in place
• Traffic from outside the VCN is blocked, giving you
confidence that your connection is secure
mTLS or TLS
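With TLS enabled, clients can connect without a wallet using a standard TLS connect descriptor. A sketch of what such a descriptor looks like (the host and service name below are placeholders, not values from this deck):

```
(description=
  (retry_count=20)(retry_delay=3)
  (address=(protocol=tcps)(port=1521)
    (host=adb.us-ashburn-1.oraclecloud.com))
  (connect_data=(service_name=mydb_high.adb.oraclecloud.com))
  (security=(ssl_server_dn_match=yes)))
```

Because the server presents a CA-signed certificate, no client wallet is required; the ACL restrictions above still gate which clients can reach the endpoint.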
All data outside the database
• Files in Object Store buckets
Exposes the power of Oracle partitioning to external
data
• Partition pruning
• Partition maintenance
Enables order-of-magnitude faster query
performance and enhanced data maintenance
Partitioned External Tables
(Diagram: daily data for 2016-04-01, 2016-04-02, and 2016-04-03 stored as File-01, File-02, and File-03 in an Object Store bucket, each mapped to a partition)
Note: only DBMS_CLOUD syntax is supported
Partitioned External Tables
BEGIN
  DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE(
    table_name      => 'PET1',
    credential_name => 'DEF_CRED_NAME',
    format          => json_object('delimiter' value ',',
                                   'recorddelimiter' value 'newline',
                                   'characterset' value 'us7ascii'),
    column_list     => 'col1 number, col2 number, col3 number',
    partitioning_clause => 'partition by range (col1) (
        partition p1 values less than (1000) location (
          ''https://swiftobjectstorage.us-ashburn-1 ... /file_01.txt''),
        partition p2 values less than (2000) location (
          ''https://swiftobjectstorage.us-ashburn-1 ... /file_02.txt''),
        partition p3 values less than (3000) location (
          ''https://swiftobjectstorage.us-ashburn-1 ... /file_03.txt''))');
END;
/
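Once created, the external table is queried like any partitioned table, and a predicate on the partition key lets the optimizer prune partitions. A small sketch against the PET1 table defined above:

```sql
-- Only partition p1 (values less than 1000) is scanned;
-- file_02 and file_03 are pruned and never read
SELECT col2, col3
FROM   PET1
WHERE  col1 < 500;
```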
Single table contains both internal (RDBMS) and external
partitions
• Full functional support, such as partial indexing,
partial read only, constraints, etc.
Partition maintenance for information lifecycle
management
• Currently limited support
• Enhancements in progress
Hybrid Partitioned Tables
(Diagram: daily data for 2016-04-01 and 2016-04-02 stored as File-01 and File-02 in an Object Store bucket, plus a database-resident partition for 2016-04-03)
Note: only DBMS_CLOUD syntax is supported
Hybrid Partitioned Tables
BEGIN
  DBMS_CLOUD.CREATE_HYBRID_PART_TABLE(
    table_name      => 'HPT1',
    credential_name => 'OBJ_STORE_CRED',
    format          => json_object('delimiter' value ',',
                                   'recorddelimiter' value 'newline',
                                   'characterset' value 'us7ascii'),
    column_list     => 'col1 number, col2 number, col3 number',
    partitioning_clause => 'partition by range (col1) (
        partition p1 values less than (1000) external location (
          ''https://swiftobjectstorage.us-ashburn-1 .../file_01.txt''),
        partition p2 values less than (2000) external location (
          ''https://swiftobjectstorage.us-ashburn-1 .../file_02.txt''),
        partition p3 values less than (3000))');
END;
/
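As a sketch of the "constraints" support mentioned above: constraints that cover external data in a hybrid partitioned table are not enforced, so they are declared in RELY DISABLE NOVALIDATE mode (the constraint name is illustrative, not from the original):

```sql
-- Assumption: the optimizer may RELY on the constraint, but it is not
-- validated against the external files
ALTER TABLE HPT1
  ADD CONSTRAINT hpt1_col1_uq UNIQUE (col1)
  RELY DISABLE NOVALIDATE;
```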
Automatic Partitioning
Automatic partitioning in ADB analyzes the application workload
Automatically applies partitioning to tables and their indexes to
improve performance or to allow better management of large tables
Automatic partitioning chooses from the following partition methods:
• INTERVAL AUTOMATIC: best suited for ranges of partition key values
• LIST AUTOMATIC: applies to distinct partition key values
• HASH: partitioning on the partition key's hash values
OVERVIEW
Automatic partitioning performs the following operations:
• Identifies candidate tables for automatic partitioning by analyzing the
workload for selected candidate tables.
• By default, automatic partitioning uses the workload information
collected in an Autonomous Database for analysis.
• Evaluates partition schemes based on workload analysis and
quantification and verification of the performance benefits:
1. Candidate empty partition schemes with synthesized
statistics are created internally and analyzed for
performance.
2. The candidate scheme with the highest estimated IO reduction is
chosen as the optimal partitioning strategy and is internally
implemented to test and verify performance.
3. If the candidate partition scheme does not improve
performance, automatic partitioning is not implemented.
• Implements the optimal partitioning strategy, if configured to do so, for the
tables analyzed by the automatic partitioning procedures.
HOW IT WORKS
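The workflow above is exposed through the DBMS_AUTO_PARTITION package. A sketch, assuming default settings (the schema and table names are placeholders, and exact parameter names should be checked against the DBMS_AUTO_PARTITION documentation):

```sql
-- Allow automatic partitioning to implement recommendations
EXEC DBMS_AUTO_PARTITION.CONFIGURE('AUTO_PARTITION_MODE', 'IMPLEMENT');

-- Analyze the workload for one table and apply the recommendation
DECLARE
  reco_id VARCHAR2(100);
BEGIN
  reco_id := DBMS_AUTO_PARTITION.RECOMMEND_PARTITION_METHOD(
               table_owner => 'ADMIN',
               table_name  => 'SALES');
  DBMS_AUTO_PARTITION.APPLY_RECOMMENDATION(reco_id);
END;
/
```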
7: Set Patch Level When Creating
A Clone and retrieve Patch
Details
Set Patch Level When Creating A Clone
When you provision or clone an Autonomous
Database instance, you can select a patch level for
upcoming patches.
There are two patch level options: Regular and Early.
The Early patch level allows testing upcoming patches one
week before they are applied as part of the regular patching
program.
The console shows the patch level setting in the section
headed Maintenance.
OVERVIEW HOW IT WORKS
View Patch Details
View Autonomous Database maintenance event history to see details about past maintenance events
(requires ADMIN user)
OVERVIEW
View Patch Details HOW IT WORKS
SELECT * FROM DBA_CLOUD_PATCH_INFO;
SELECT * FROM DBA_CLOUD_PATCH_INFO WHERE PATCH_VERSION = 'ADBS-21.7.1.2';
• Monitor health, capacity, performance of ADB instances
• Uses metrics, alarms, and notifications
• Metrics accessible via OCI console or using APIs
Monitor Performance with ADB Metrics
1. CPU Utilization
2. Memory Utilization
3. Sessions
4. Failed Connections
5. Execute Count
6. Queued Statements
7. Running Statements
8. Failed Logons
9. Current Logons
10. Transaction Count
11. User Calls
12. Parse Count (Total)
Available Service Metrics
The Autonomous Database Details page
provides a view of the top 6 metrics from the
library of service metrics.
Viewing the Top 6 Metrics on the ADB
Console
The complete library of metrics is
available via the OCI Console
Service Metrics page
or by using the
Monitoring API.
Viewing the Full Library of Database Metrics
Data Safe audit retention time increased
The original Data Safe audit retention was for a
maximum of 12 months.
The maximum retention has been increased to 84 months (7 years),
with an online retention period of 12 months
supplemented by an additional archive retention
period of 72 months (six years).
The Data Safe console is where you
configure the retention period.
OVERVIEW HOW IT WORKS
Customer Managed Keys
ADB provides two options for Transparent Data Encryption (TDE) to encrypt data in the database:
• Oracle-managed encryption keys
• Customer-managed encryption keys
• Customer-managed keys integrate with the Oracle Cloud Infrastructure Vault service
• When rotating a customer-managed master encryption key, ADB generates a new TDE master key
• ADB uses the new TDE master key to re-encrypt the tablespace encryption keys that encrypt and
decrypt your data
• The operation is fast and does not require database downtime
OVERVIEW
One-click start to analyzing with graphs in Oracle Autonomous Database
Graph Studio Now Fully GA
Graph Studio provides a comprehensive set of features:
• Graph modeling tool to map relational data to graphs
• Launched directly from OCI Console
• Browser-based notebooks for interactive analysis and collaboration
• Integrated graph visualization
• PGQL: SQL-like property graph query language
• Nearly 60 pre-built property graph algorithms
• PageRank, Community Detection, Shortest path, etc.
OVERVIEW
Use Resource Principal To Access OCI Resources
1) Create a dynamic group
Tells IAM that a given Autonomous Database should be able to read from the Object
Storage buckets and objects that are in a given compartment
HOW IT WORKS
In the OCI console, go to 'Identity and Security' -> 'Dynamic Groups' -> 'Create Dynamic Group'
To include only the ADB-S instance in this dynamic group, add the instance OCID in the following rule:
resource.id = 'ocid1.autonomousdatabase.oc1.iad.osbgdthsnmakytsbnjpq7n37q'
Use Resource Principal To Access OCI Resources
2) Create a policy
Allow this resource to access our Object Storage bucket that resides in a given compartment
HOW IT WORKS
In the OCI console, go to 'Identity and Security' -> 'Policies' -> 'Create Policy'
Add your policy statement in plain text or use the Policy Builder.
Allow dynamic-group ctuzlaDynamicGroup to read buckets in compartment ctuzlaRPcomp
Allow dynamic-group ctuzlaDynamicGroup to read objects in compartment ctuzlaRPcomp
Note: It’s also possible to allow higher levels of access as described in the documentation
Use Resource Principal To Access OCI Resources
3) Enable resource principal in ADB-S
Resource principal is not enabled by default in ADB-S.
To use resource principal in our ADB-S instance, we need to enable it using
the DBMS_CLOUD_ADMIN.ENABLE_RESOURCE_PRINCIPAL procedure:
HOW IT WORKS
As ADMIN user, execute the following statement:
EXEC DBMS_CLOUD_ADMIN.ENABLE_RESOURCE_PRINCIPAL();
PL/SQL procedure successfully completed.
Use Resource Principal To Access OCI Resources
4) Verify that resource principal is enabled:
HOW IT WORKS
SELECT owner, credential_name
FROM dba_credentials
WHERE credential_name = 'OCI$RESOURCE_PRINCIPAL' AND owner = 'ADMIN';
OWNER CREDENTIAL_NAME
----- ----------------------
ADMIN OCI$RESOURCE_PRINCIPAL
Use Resource Principal To Access OCI Resources
5) Optionally, enable other database users to call DBMS_CLOUD APIs using resource principal
HOW IT WORKS
EXEC DBMS_CLOUD_ADMIN.ENABLE_RESOURCE_PRINCIPAL(username => 'ADB_USER');
PL/SQL procedure successfully completed.
Use Resource Principal To Access OCI Resources
6) Load data from Object Storage using resource principal
HOW IT WORKS
CREATE TABLE CHANNELS
(channel_id CHAR(1),
channel_desc VARCHAR2(20),
channel_class VARCHAR2(20)
);
Table CHANNELS created.
BEGIN
DBMS_CLOUD.COPY_DATA(
table_name =>'CHANNELS',
credential_name =>'OCI$RESOURCE_PRINCIPAL',
file_uri_list =>'https://objectstorage.us-ashburn-1.oraclecloud.com/n/adwc4pm/b/ctuzlaBucket/o/chan_v3.dat',
format => json_object('ignoremissingcolumns' value 'true', 'removequotes' value 'true')
);
END;
/
PL/SQL procedure successfully completed.
• For data loading from files in the Cloud
• Store your object storage credentials
• Use the procedure DBMS_CLOUD.COPY_DATA to load
data
• The source file in this example is channels.txt
Load data using DBMS_CLOUD
SET DEFINE OFF
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'DEF_CRED_NAME',
    username        => 'adwc_user@example.com',
    password        => 'password');
END;
/
Load data using DBMS_CLOUD
CREATE TABLE CHANNELS (
  channel_id    CHAR(1),
  channel_desc  VARCHAR2(20),
  channel_class VARCHAR2(20));

BEGIN
  DBMS_CLOUD.COPY_DATA(
    table_name      => 'CHANNELS',
    credential_name => 'DEF_CRED_NAME',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/channels.txt',
    format          => json_object('delimiter' value ','));
END;
/
BEGIN
  DBMS_CLOUD.COPY_DATA(
    table_name      => 'CHANNELS',
    credential_name => 'DEF_CRED_NAME',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/exp01.dmp,
                        https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/exp02.dmp',
    format          => json_object('type' value 'datapump'));
END;
/
Load data using DBMS_CLOUD
BEGIN
  DBMS_CLOUD.COPY_COLLECTION(
    collection_name => 'fruit',
    credential_name => 'DEF_CRED_NAME',
    file_uri_list   => 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/namespace-string/b/fruit_bucket/o/myCollection.json',
    format          => JSON_OBJECT('recorddelimiter' value '''n'''));
END;
/

BEGIN
  DBMS_CLOUD.COPY_COLLECTION(
    collection_name => 'fruit2',
    credential_name => 'DEF_CRED_NAME',
    file_uri_list   => 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/namespace-string/b/json/o/fruit_array.json',
    format          => '{"recorddelimiter" : "0x''01''", "unpackarrays" : TRUE}');
END;
/
Load data using DBMS_CLOUD
SELECT table_name, owner_name, type, status, start_time, update_time,
       logfile_table, badfile_table
FROM   user_load_operations
WHERE  type = 'COPY';
TABLE_NAME OWNER_NAME TYPE STATUS START_TIME UPDATE_TIME LOGFILE_TABLE BADFILE_TABLE
------------------------------------------------------------------------------------
FRUIT ADMIN COPY COMPLETED 2020-04-23 22:27:37 2020-04-23 22:27:38 "" ""
FRUIT ADMIN COPY FAILED 2020-04-23 22:28:36 2020-04-23 22:28:37 COPY$2_LOG COPY$2_BAD
SELECT credential_name, username, comments FROM all_credentials;
CREDENTIAL_NAME USERNAME COMMENTS
---------------------------–----------------------------- --------------------
ADB_TOKEN user_name@example.com {"comments":"Created via
DBMS_CLOUD.create_credential"}
DEF_CRED_NAME user_name@example.com {"comments":"Created via
DBMS_CLOUD.create_credential"}
19 - ADB now supports SQL
access to tenancy details
20 - Always free - Oracle APEX
Application Development
Always free - Oracle APEX Application Development
All-inclusive and fully managed service with up to 1 OCPU (shared) and 20 GB of storage
Can support approximately 3-6 users accessing the service simultaneously, plus an unlimited number of applications,
developer accounts, and end-user accounts
Easily upgrade to Paid APEX Service with a single click to provision additional OCPUs and storage
HOW IT WORKS
SQL / REST / MONGO API
New editing experience
Insight/type-ahead support for the data dictionary
and what's already in your editor
Access to the Command Palette and some
powerful editor widgets
Different editor look-and-feel schemes.
Load data
Pick a data location such as a local file, a remote database, or a Cloud object store,
and the wizard will guide you through the process of loading your data.
Business model
Build sophisticated models by identifying dimensions, hierarchies and measures
within a data set.
Define aggregation rules
for measures, such as
sum, average, etc.
Thank You
Any Questions ?
Sandesh Rao
VP AIOps for the Autonomous Database
@sandeshr
https://www.linkedin.com/in/raosandesh/
https://www.slideshare.net/SandeshRao4